Jan 17 12:21:43.922259 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:21:43.922288 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:21:43.922304 kernel: BIOS-provided physical RAM map:
Jan 17 12:21:43.922312 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 12:21:43.922320 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 12:21:43.922329 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 12:21:43.922338 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 17 12:21:43.922347 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 17 12:21:43.922354 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 17 12:21:43.922363 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 17 12:21:43.922382 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 12:21:43.922390 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 12:21:43.922399 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 17 12:21:43.922408 kernel: NX (Execute Disable) protection: active
Jan 17 12:21:43.922418 kernel: APIC: Static calls initialized
Jan 17 12:21:43.922431 kernel: SMBIOS 2.8 present.
Jan 17 12:21:43.922441 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 17 12:21:43.922450 kernel: Hypervisor detected: KVM
Jan 17 12:21:43.922459 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 12:21:43.922467 kernel: kvm-clock: using sched offset of 2342055139 cycles
Jan 17 12:21:43.922475 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 12:21:43.922483 kernel: tsc: Detected 2794.748 MHz processor
Jan 17 12:21:43.922490 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:21:43.922497 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:21:43.922504 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 17 12:21:43.922513 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 12:21:43.922520 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:21:43.922527 kernel: Using GB pages for direct mapping
Jan 17 12:21:43.922534 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:21:43.922541 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 17 12:21:43.922548 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:21:43.922555 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:21:43.922562 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:21:43.922575 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 17 12:21:43.922585 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:21:43.922595 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:21:43.922604 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:21:43.922613 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 12:21:43.922622 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 17 12:21:43.922632 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 17 12:21:43.922646 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 17 12:21:43.922659 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 17 12:21:43.922668 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 17 12:21:43.922678 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 17 12:21:43.922688 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 17 12:21:43.922697 kernel: No NUMA configuration found
Jan 17 12:21:43.922707 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 17 12:21:43.922716 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 17 12:21:43.922729 kernel: Zone ranges:
Jan 17 12:21:43.922738 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:21:43.922748 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 17 12:21:43.922757 kernel: Normal empty
Jan 17 12:21:43.922766 kernel: Movable zone start for each node
Jan 17 12:21:43.922773 kernel: Early memory node ranges
Jan 17 12:21:43.922780 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 12:21:43.922787 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 17 12:21:43.922794 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 17 12:21:43.922804 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:21:43.922811 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 12:21:43.922818 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 17 12:21:43.922825 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 12:21:43.922832 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 12:21:43.922839 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 12:21:43.922846 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 12:21:43.922853 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 12:21:43.922861 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 12:21:43.922870 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 12:21:43.922877 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 12:21:43.922884 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:21:43.922891 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 12:21:43.922898 kernel: TSC deadline timer available
Jan 17 12:21:43.922905 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 17 12:21:43.922912 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 12:21:43.922919 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 17 12:21:43.922926 kernel: kvm-guest: setup PV sched yield
Jan 17 12:21:43.922933 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 17 12:21:43.922942 kernel: Booting paravirtualized kernel on KVM
Jan 17 12:21:43.922950 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:21:43.922957 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 17 12:21:43.922964 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 17 12:21:43.922971 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 17 12:21:43.922978 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 17 12:21:43.922985 kernel: kvm-guest: PV spinlocks enabled
Jan 17 12:21:43.922992 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 12:21:43.923000 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:21:43.923010 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:21:43.923017 kernel: random: crng init done
Jan 17 12:21:43.923025 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 12:21:43.923032 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:21:43.923039 kernel: Fallback order for Node 0: 0
Jan 17 12:21:43.923046 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 17 12:21:43.923053 kernel: Policy zone: DMA32
Jan 17 12:21:43.923060 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:21:43.923070 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 136900K reserved, 0K cma-reserved)
Jan 17 12:21:43.923077 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 17 12:21:43.923084 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:21:43.923091 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:21:43.923108 kernel: Dynamic Preempt: voluntary
Jan 17 12:21:43.923115 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:21:43.923123 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:21:43.923130 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 17 12:21:43.923138 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:21:43.923148 kernel: Rude variant of Tasks RCU enabled.
Jan 17 12:21:43.923155 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:21:43.923162 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:21:43.923170 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 17 12:21:43.923177 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 17 12:21:43.923184 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:21:43.923191 kernel: Console: colour VGA+ 80x25
Jan 17 12:21:43.923198 kernel: printk: console [ttyS0] enabled
Jan 17 12:21:43.923205 kernel: ACPI: Core revision 20230628
Jan 17 12:21:43.923215 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 17 12:21:43.923222 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:21:43.923229 kernel: x2apic enabled
Jan 17 12:21:43.923236 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 12:21:43.923243 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 17 12:21:43.923251 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 17 12:21:43.923258 kernel: kvm-guest: setup PV IPIs
Jan 17 12:21:43.923275 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 17 12:21:43.923283 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 17 12:21:43.923290 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 17 12:21:43.923297 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 17 12:21:43.923307 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 17 12:21:43.923314 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 17 12:21:43.923322 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:21:43.923329 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 12:21:43.923336 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:21:43.923346 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 17 12:21:43.923353 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 17 12:21:43.923361 kernel: RETBleed: Mitigation: untrained return thunk
Jan 17 12:21:43.923369 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 12:21:43.923392 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 12:21:43.923403 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 17 12:21:43.923412 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 17 12:21:43.923420 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 17 12:21:43.923427 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 12:21:43.923439 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 12:21:43.923446 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 12:21:43.923454 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 12:21:43.923462 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 17 12:21:43.923472 kernel: Freeing SMP alternatives memory: 32K
Jan 17 12:21:43.923482 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:21:43.923493 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:21:43.923503 kernel: landlock: Up and running.
Jan 17 12:21:43.923516 kernel: SELinux: Initializing.
Jan 17 12:21:43.923524 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:21:43.923534 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:21:43.923543 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 17 12:21:43.923556 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 12:21:43.923566 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 12:21:43.923576 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 17 12:21:43.923587 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 17 12:21:43.923597 kernel: ... version: 0
Jan 17 12:21:43.923610 kernel: ... bit width: 48
Jan 17 12:21:43.923618 kernel: ... generic registers: 6
Jan 17 12:21:43.923625 kernel: ... value mask: 0000ffffffffffff
Jan 17 12:21:43.923633 kernel: ... max period: 00007fffffffffff
Jan 17 12:21:43.923640 kernel: ... fixed-purpose events: 0
Jan 17 12:21:43.923647 kernel: ... event mask: 000000000000003f
Jan 17 12:21:43.923655 kernel: signal: max sigframe size: 1776
Jan 17 12:21:43.923662 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:21:43.923670 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:21:43.923679 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:21:43.923687 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 12:21:43.923694 kernel: .... node #0, CPUs: #1 #2 #3
Jan 17 12:21:43.923702 kernel: smp: Brought up 1 node, 4 CPUs
Jan 17 12:21:43.923709 kernel: smpboot: Max logical packages: 1
Jan 17 12:21:43.923717 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 17 12:21:43.923724 kernel: devtmpfs: initialized
Jan 17 12:21:43.923732 kernel: x86/mm: Memory block size: 128MB
Jan 17 12:21:43.923739 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:21:43.923747 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 17 12:21:43.923756 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:21:43.923764 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:21:43.923771 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:21:43.923779 kernel: audit: type=2000 audit(1737116503.087:1): state=initialized audit_enabled=0 res=1
Jan 17 12:21:43.923786 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:21:43.923793 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 12:21:43.923801 kernel: cpuidle: using governor menu
Jan 17 12:21:43.923808 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:21:43.923816 kernel: dca service started, version 1.12.1
Jan 17 12:21:43.923825 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 17 12:21:43.923833 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 17 12:21:43.923840 kernel: PCI: Using configuration type 1 for base access
Jan 17 12:21:43.923848 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 12:21:43.923855 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 12:21:43.923863 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 12:21:43.923870 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:21:43.923878 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:21:43.923888 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:21:43.923895 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:21:43.923902 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:21:43.923910 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:21:43.923917 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:21:43.923925 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 12:21:43.923932 kernel: ACPI: Interpreter enabled
Jan 17 12:21:43.923939 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 17 12:21:43.923947 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 12:21:43.923955 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 12:21:43.923964 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 12:21:43.923972 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 17 12:21:43.923979 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 12:21:43.924198 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:21:43.924349 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 17 12:21:43.924547 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 17 12:21:43.924559 kernel: PCI host bridge to bus 0000:00
Jan 17 12:21:43.924689 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 12:21:43.924801 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 12:21:43.924912 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 12:21:43.925026 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 17 12:21:43.925151 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 17 12:21:43.925284 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 17 12:21:43.925413 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 12:21:43.925573 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 17 12:21:43.925708 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 17 12:21:43.925829 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 17 12:21:43.925952 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 17 12:21:43.926077 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 17 12:21:43.926211 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 12:21:43.926392 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 17 12:21:43.926524 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 17 12:21:43.926652 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 17 12:21:43.926774 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 17 12:21:43.926932 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 17 12:21:43.927072 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 17 12:21:43.927220 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 17 12:21:43.927348 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 17 12:21:43.927602 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 17 12:21:43.927728 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 17 12:21:43.927848 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 17 12:21:43.927966 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 17 12:21:43.928104 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 17 12:21:43.928272 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 17 12:21:43.928425 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 17 12:21:43.928562 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 17 12:21:43.928723 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 17 12:21:43.928858 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 17 12:21:43.928986 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 17 12:21:43.929119 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 17 12:21:43.929134 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 12:21:43.929142 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 12:21:43.929150 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 12:21:43.929157 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 12:21:43.929165 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 17 12:21:43.929173 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 17 12:21:43.929181 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 17 12:21:43.929188 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 17 12:21:43.929196 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 17 12:21:43.929206 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 17 12:21:43.929214 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 17 12:21:43.929221 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 17 12:21:43.929229 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 17 12:21:43.929237 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 17 12:21:43.929244 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 17 12:21:43.929252 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 17 12:21:43.929259 kernel: iommu: Default domain type: Translated
Jan 17 12:21:43.929267 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 12:21:43.929277 kernel: PCI: Using ACPI for IRQ routing
Jan 17 12:21:43.929285 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 12:21:43.929293 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 12:21:43.929300 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 17 12:21:43.929443 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 17 12:21:43.929599 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 17 12:21:43.929735 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 12:21:43.929745 kernel: vgaarb: loaded
Jan 17 12:21:43.929753 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 17 12:21:43.929765 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 17 12:21:43.929773 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 12:21:43.929780 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:21:43.929788 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:21:43.929796 kernel: pnp: PnP ACPI init
Jan 17 12:21:43.929932 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 17 12:21:43.929944 kernel: pnp: PnP ACPI: found 6 devices
Jan 17 12:21:43.929952 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 12:21:43.929963 kernel: NET: Registered PF_INET protocol family
Jan 17 12:21:43.929971 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 12:21:43.929978 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 12:21:43.929986 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:21:43.929994 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:21:43.930001 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 12:21:43.930009 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 12:21:43.930017 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:21:43.930027 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:21:43.930035 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:21:43.930042 kernel: NET: Registered PF_XDP protocol family
Jan 17 12:21:43.930166 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 12:21:43.930291 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 12:21:43.930443 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 12:21:43.930585 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 17 12:21:43.930732 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 17 12:21:43.930878 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 17 12:21:43.930898 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:21:43.930909 kernel: Initialise system trusted keyrings
Jan 17 12:21:43.930919 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 12:21:43.930930 kernel: Key type asymmetric registered
Jan 17 12:21:43.930940 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:21:43.930950 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 12:21:43.930961 kernel: io scheduler mq-deadline registered
Jan 17 12:21:43.930971 kernel: io scheduler kyber registered
Jan 17 12:21:43.930982 kernel: io scheduler bfq registered
Jan 17 12:21:43.930996 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 12:21:43.931007 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 17 12:21:43.931018 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 17 12:21:43.931028 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 17 12:21:43.931039 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:21:43.931050 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 12:21:43.931061 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 12:21:43.931072 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 12:21:43.931082 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 12:21:43.931107 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 12:21:43.931270 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 17 12:21:43.931449 kernel: rtc_cmos 00:04: registered as rtc0
Jan 17 12:21:43.931616 kernel: rtc_cmos 00:04: setting system clock to 2025-01-17T12:21:43 UTC (1737116503)
Jan 17 12:21:43.931764 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 17 12:21:43.931780 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 17 12:21:43.931790 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:21:43.931801 kernel: Segment Routing with IPv6
Jan 17 12:21:43.931816 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:21:43.931827 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:21:43.931837 kernel: Key type dns_resolver registered
Jan 17 12:21:43.931847 kernel: IPI shorthand broadcast: enabled
Jan 17 12:21:43.931858 kernel: sched_clock: Marking stable (626002251, 121355016)->(813598859, -66241592)
Jan 17 12:21:43.931869 kernel: registered taskstats version 1
Jan 17 12:21:43.931879 kernel: Loading compiled-in X.509 certificates
Jan 17 12:21:43.931890 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80'
Jan 17 12:21:43.931900 kernel: Key type .fscrypt registered
Jan 17 12:21:43.931914 kernel: Key type fscrypt-provisioning registered
Jan 17 12:21:43.931924 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:21:43.931935 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:21:43.931946 kernel: ima: No architecture policies found
Jan 17 12:21:43.931956 kernel: clk: Disabling unused clocks
Jan 17 12:21:43.931967 kernel: Freeing unused kernel image (initmem) memory: 42848K
Jan 17 12:21:43.931977 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 12:21:43.931987 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 17 12:21:43.931997 kernel: Run /init as init process
Jan 17 12:21:43.932011 kernel: with arguments:
Jan 17 12:21:43.932022 kernel: /init
Jan 17 12:21:43.932032 kernel: with environment:
Jan 17 12:21:43.932042 kernel: HOME=/
Jan 17 12:21:43.932053 kernel: TERM=linux
Jan 17 12:21:43.932063 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:21:43.932076 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:21:43.932090 systemd[1]: Detected virtualization kvm.
Jan 17 12:21:43.932116 systemd[1]: Detected architecture x86-64.
Jan 17 12:21:43.932127 systemd[1]: Running in initrd.
Jan 17 12:21:43.932138 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:21:43.932149 systemd[1]: Hostname set to .
Jan 17 12:21:43.932161 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:21:43.932172 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:21:43.932183 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:21:43.932194 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:21:43.932210 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:21:43.932237 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:21:43.932255 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:21:43.932267 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:21:43.932281 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:21:43.932295 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:21:43.932307 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:21:43.932318 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:21:43.932330 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:21:43.932341 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:21:43.932352 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:21:43.932364 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:21:43.932392 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:21:43.932409 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:21:43.932420 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:21:43.932432 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:21:43.932443 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:21:43.932455 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:21:43.932466 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:21:43.932477 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:21:43.932489 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:21:43.932503 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:21:43.932515 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:21:43.932526 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:21:43.932538 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:21:43.932550 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:21:43.932561 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:43.932574 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:21:43.932588 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:21:43.932603 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:21:43.932622 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:21:43.932669 systemd-journald[192]: Collecting audit messages is disabled. Jan 17 12:21:43.932708 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:21:43.932724 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:21:43.932739 systemd-journald[192]: Journal started Jan 17 12:21:43.932774 systemd-journald[192]: Runtime Journal (/run/log/journal/4faf820c8d0441d9825f14dfd5a18ab1) is 6.0M, max 48.4M, 42.3M free. Jan 17 12:21:43.937567 systemd-modules-load[194]: Inserted module 'overlay' Jan 17 12:21:43.969734 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 17 12:21:43.969764 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:21:43.972169 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 17 12:21:43.973208 kernel: Bridge firewalling registered Jan 17 12:21:43.973854 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:43.976831 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:21:43.982584 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:21:43.985479 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:21:43.987548 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:21:43.988893 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:21:44.001951 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:21:44.003345 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:21:44.015723 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:21:44.018209 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:21:44.019706 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 17 12:21:44.038541 dracut-cmdline[230]: dracut-dracut-053 Jan 17 12:21:44.042406 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:21:44.054165 systemd-resolved[221]: Positive Trust Anchors: Jan 17 12:21:44.054192 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:21:44.054231 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:21:44.057152 systemd-resolved[221]: Defaulting to hostname 'linux'. Jan 17 12:21:44.058255 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:21:44.064812 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:21:44.140443 kernel: SCSI subsystem initialized Jan 17 12:21:44.149420 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:21:44.160410 kernel: iscsi: registered transport (tcp) Jan 17 12:21:44.182413 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:21:44.182483 kernel: QLogic iSCSI HBA Driver Jan 17 12:21:44.237599 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 17 12:21:44.251697 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:21:44.283323 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:21:44.283442 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:21:44.283455 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:21:44.327429 kernel: raid6: avx2x4 gen() 21322 MB/s Jan 17 12:21:44.344434 kernel: raid6: avx2x2 gen() 22759 MB/s Jan 17 12:21:44.361811 kernel: raid6: avx2x1 gen() 18295 MB/s Jan 17 12:21:44.361899 kernel: raid6: using algorithm avx2x2 gen() 22759 MB/s Jan 17 12:21:44.379659 kernel: raid6: .... xor() 18880 MB/s, rmw enabled Jan 17 12:21:44.379723 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:21:44.401416 kernel: xor: automatically using best checksumming function avx Jan 17 12:21:44.565425 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:21:44.579139 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:21:44.595595 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:21:44.609066 systemd-udevd[412]: Using default interface naming scheme 'v255'. Jan 17 12:21:44.613980 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:21:44.622535 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:21:44.637519 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Jan 17 12:21:44.673370 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:21:44.693635 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:21:44.759071 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:21:44.768667 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 17 12:21:44.780482 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:21:44.783813 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:21:44.786371 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:21:44.787585 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:21:44.792530 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:21:44.806702 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:21:44.825429 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 17 12:21:44.839361 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 17 12:21:44.839558 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:21:44.839578 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:21:44.839588 kernel: GPT:9289727 != 19775487 Jan 17 12:21:44.839598 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:21:44.839608 kernel: GPT:9289727 != 19775487 Jan 17 12:21:44.839617 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:21:44.839627 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:21:44.825600 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:21:44.841523 kernel: libata version 3.00 loaded. Jan 17 12:21:44.825701 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:21:44.827580 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:21:44.829128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:21:44.829181 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:44.832734 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 17 12:21:44.875594 kernel: ahci 0000:00:1f.2: version 3.0 Jan 17 12:21:44.891153 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 17 12:21:44.891167 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 17 12:21:44.891316 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 17 12:21:44.891503 kernel: scsi host0: ahci Jan 17 12:21:44.891655 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 12:21:44.891666 kernel: scsi host1: ahci Jan 17 12:21:44.891822 kernel: AES CTR mode by8 optimization enabled Jan 17 12:21:44.891834 kernel: scsi host2: ahci Jan 17 12:21:44.891975 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (462) Jan 17 12:21:44.891986 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (470) Jan 17 12:21:44.891997 kernel: scsi host3: ahci Jan 17 12:21:44.892152 kernel: scsi host4: ahci Jan 17 12:21:44.892292 kernel: scsi host5: ahci Jan 17 12:21:44.892477 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 17 12:21:44.892489 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 17 12:21:44.892499 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 17 12:21:44.892512 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 17 12:21:44.892524 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 17 12:21:44.892537 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 17 12:21:44.845792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:44.903081 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 12:21:44.937731 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 17 12:21:44.945356 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 12:21:44.948093 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 12:21:44.981093 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:21:44.987164 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 12:21:45.022513 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:21:45.024539 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:21:45.049598 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:21:45.194898 disk-uuid[556]: Primary Header is updated. Jan 17 12:21:45.194898 disk-uuid[556]: Secondary Entries is updated. Jan 17 12:21:45.194898 disk-uuid[556]: Secondary Header is updated. 
Jan 17 12:21:45.203289 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 12:21:45.203314 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 12:21:45.203325 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 12:21:45.203335 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 12:21:45.203346 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:21:45.204422 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 17 12:21:45.204448 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 17 12:21:45.205398 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 17 12:21:45.206481 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:21:45.206494 kernel: ata3.00: applying bridge limits Jan 17 12:21:45.207398 kernel: ata3.00: configured for UDMA/100 Jan 17 12:21:45.208415 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 17 12:21:45.252878 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 17 12:21:45.266531 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 12:21:45.266548 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 17 12:21:46.209408 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:21:46.210431 disk-uuid[565]: The operation has completed successfully. Jan 17 12:21:46.243228 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:21:46.243368 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:21:46.269530 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:21:46.275224 sh[592]: Success Jan 17 12:21:46.288449 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 17 12:21:46.324627 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:21:46.339355 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:21:46.342654 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 17 12:21:46.354738 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:21:46.354779 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:21:46.354795 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:21:46.355856 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:21:46.356613 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:21:46.364926 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:21:46.366340 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:21:46.377578 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:21:46.379803 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:21:46.389663 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:46.389701 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:21:46.389712 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:21:46.393475 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:21:46.404366 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:21:46.408411 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:46.510846 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:21:46.540688 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 17 12:21:46.563257 systemd-networkd[770]: lo: Link UP Jan 17 12:21:46.563270 systemd-networkd[770]: lo: Gained carrier Jan 17 12:21:46.564935 systemd-networkd[770]: Enumeration completed Jan 17 12:21:46.565072 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:21:46.565349 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:21:46.565354 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:21:46.566133 systemd[1]: Reached target network.target - Network. Jan 17 12:21:46.567342 systemd-networkd[770]: eth0: Link UP Jan 17 12:21:46.567346 systemd-networkd[770]: eth0: Gained carrier Jan 17 12:21:46.567353 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:21:46.603435 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:21:46.714123 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:21:46.734596 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 17 12:21:46.790063 ignition[775]: Ignition 2.19.0 Jan 17 12:21:46.790075 ignition[775]: Stage: fetch-offline Jan 17 12:21:46.790117 ignition[775]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:46.790128 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:21:46.790234 ignition[775]: parsed url from cmdline: "" Jan 17 12:21:46.790239 ignition[775]: no config URL provided Jan 17 12:21:46.790244 ignition[775]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:21:46.790254 ignition[775]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:21:46.790282 ignition[775]: op(1): [started] loading QEMU firmware config module Jan 17 12:21:46.790287 ignition[775]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 17 12:21:46.798755 ignition[775]: op(1): [finished] loading QEMU firmware config module Jan 17 12:21:46.800437 ignition[775]: parsing config with SHA512: 69513adf573fde0cabbd590c04ceae8ee68ad8f4cf14e25fe22fcab39cc0d855936b1d278cee197391ee96e15ad2e0b501d11af3ee2190fa0450ed6bfc7bafbf Jan 17 12:21:46.802738 unknown[775]: fetched base config from "system" Jan 17 12:21:46.802750 unknown[775]: fetched user config from "qemu" Jan 17 12:21:46.803175 ignition[775]: fetch-offline: fetch-offline passed Jan 17 12:21:46.803445 systemd-resolved[221]: Detected conflict on linux IN A 10.0.0.151 Jan 17 12:21:46.803237 ignition[775]: Ignition finished successfully Jan 17 12:21:46.803455 systemd-resolved[221]: Hostname conflict, changing published hostname from 'linux' to 'linux9'. Jan 17 12:21:46.805561 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:21:46.807011 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 17 12:21:46.813569 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 17 12:21:46.827585 ignition[785]: Ignition 2.19.0 Jan 17 12:21:46.827596 ignition[785]: Stage: kargs Jan 17 12:21:46.848903 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:21:46.827771 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:46.827782 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:21:46.828434 ignition[785]: kargs: kargs passed Jan 17 12:21:46.828476 ignition[785]: Ignition finished successfully Jan 17 12:21:46.864542 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:21:46.879113 ignition[793]: Ignition 2.19.0 Jan 17 12:21:46.879125 ignition[793]: Stage: disks Jan 17 12:21:46.879302 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:46.879317 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:21:46.879936 ignition[793]: disks: disks passed Jan 17 12:21:46.882498 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:21:46.879977 ignition[793]: Ignition finished successfully Jan 17 12:21:46.883158 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:21:46.884885 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:21:46.885212 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:21:46.885797 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:21:46.886210 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:21:46.905711 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:21:46.919844 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 12:21:46.927940 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:21:46.939603 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 17 12:21:47.028407 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:21:47.029328 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:21:47.030852 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:21:47.040468 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:21:47.042304 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:21:47.043665 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:21:47.048979 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812) Jan 17 12:21:47.043705 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:21:47.055965 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:47.055981 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:21:47.055992 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:21:47.056003 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:21:47.043726 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:21:47.050873 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:21:47.057129 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:21:47.060043 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 17 12:21:47.099841 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:21:47.104584 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:21:47.109336 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:21:47.114959 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:21:47.208294 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:21:47.216629 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:21:47.217931 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:21:47.225411 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:47.242943 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:21:47.248185 ignition[925]: INFO : Ignition 2.19.0 Jan 17 12:21:47.248185 ignition[925]: INFO : Stage: mount Jan 17 12:21:47.250070 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:47.250070 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:21:47.250070 ignition[925]: INFO : mount: mount passed Jan 17 12:21:47.250070 ignition[925]: INFO : Ignition finished successfully Jan 17 12:21:47.255823 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:21:47.260560 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:21:47.354097 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:21:47.366624 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 17 12:21:47.376145 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (938) Jan 17 12:21:47.376181 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:21:47.376192 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:21:47.377270 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:21:47.380427 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:21:47.382528 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:21:47.402739 ignition[955]: INFO : Ignition 2.19.0 Jan 17 12:21:47.402739 ignition[955]: INFO : Stage: files Jan 17 12:21:47.404677 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:47.404677 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:21:47.407414 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:21:47.409081 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:21:47.409081 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:21:47.413119 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:21:47.414802 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:21:47.416723 unknown[955]: wrote ssh authorized keys file for user: core Jan 17 12:21:47.417977 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:21:47.420570 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:21:47.422793 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:21:47.422793 ignition[955]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:21:47.422793 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:21:47.422793 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:47.422793 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:47.422793 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:47.422793 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:21:47.759583 systemd-networkd[770]: eth0: Gained IPv6LL Jan 17 12:21:47.805290 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 17 12:21:48.173703 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:21:48.173703 ignition[955]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jan 17 12:21:48.177591 ignition[955]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:21:48.177591 ignition[955]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:21:48.177591 ignition[955]: INFO : files: op(7): [finished] processing 
unit "coreos-metadata.service" Jan 17 12:21:48.177591 ignition[955]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 12:21:48.205707 ignition[955]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:21:48.212506 ignition[955]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:21:48.217418 ignition[955]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 12:21:48.219134 ignition[955]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:21:48.221147 ignition[955]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:21:48.223063 ignition[955]: INFO : files: files passed Jan 17 12:21:48.223885 ignition[955]: INFO : Ignition finished successfully Jan 17 12:21:48.227233 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:21:48.234718 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:21:48.238430 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:21:48.240044 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:21:48.240193 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 17 12:21:48.247784 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 12:21:48.250831 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:48.250831 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:48.264018 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:21:48.253330 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:21:48.264649 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:21:48.278511 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:21:48.306540 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:21:48.306665 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:21:48.308941 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:21:48.311001 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:21:48.311442 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:21:48.312221 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:21:48.335209 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:21:48.347653 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:21:48.358399 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:21:48.359100 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:21:48.359731 systemd[1]: Stopped target timers.target - Timer Units. 
Jan 17 12:21:48.410444 ignition[1010]: INFO : Ignition 2.19.0 Jan 17 12:21:48.410444 ignition[1010]: INFO : Stage: umount Jan 17 12:21:48.410444 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:21:48.410444 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:21:48.410444 ignition[1010]: INFO : umount: umount passed Jan 17 12:21:48.410444 ignition[1010]: INFO : Ignition finished successfully Jan 17 12:21:48.360135 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:21:48.360302 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:21:48.361154 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:21:48.361793 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:21:48.362214 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:21:48.362817 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:21:48.363179 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:21:48.363682 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:21:48.363999 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:21:48.364413 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:21:48.364736 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:21:48.365082 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:21:48.365274 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:21:48.365428 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:21:48.366174 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:21:48.366413 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 17 12:21:48.366775 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 12:21:48.366903 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:21:48.367232 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 12:21:48.367342 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:21:48.367875 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 12:21:48.367996 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:21:48.368419 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 12:21:48.368745 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 12:21:48.374536 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:21:48.375106 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 12:21:48.375698 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 12:21:48.376264 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 12:21:48.376367 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:21:48.376929 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 12:21:48.377023 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:21:48.377280 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 12:21:48.377406 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:21:48.377907 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 12:21:48.378021 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 12:21:48.379149 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 12:21:48.380500 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 12:21:48.380850 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 12:21:48.380957 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:21:48.381354 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 12:21:48.381463 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:21:48.385804 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 12:21:48.385944 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 12:21:48.397400 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 12:21:48.397550 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 12:21:48.398281 systemd[1]: Stopped target network.target - Network.
Jan 17 12:21:48.398698 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 12:21:48.398757 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 12:21:48.399131 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 12:21:48.399182 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 12:21:48.399670 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 12:21:48.399725 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 12:21:48.400013 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 12:21:48.400063 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 12:21:48.400512 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 12:21:48.400931 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 12:21:48.408604 systemd-networkd[770]: eth0: DHCPv6 lease lost
Jan 17 12:21:48.409057 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 12:21:48.409210 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 12:21:48.414348 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 12:21:48.414861 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 12:21:48.415011 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 12:21:48.418543 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 12:21:48.418607 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:21:48.427505 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 12:21:48.429022 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 12:21:48.429088 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:21:48.431652 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 12:21:48.431715 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:21:48.433926 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 12:21:48.433994 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:21:48.436163 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 12:21:48.436212 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:21:48.438291 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:21:48.457407 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 12:21:48.457649 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:21:48.458906 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 12:21:48.458969 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:21:48.461166 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 12:21:48.461232 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:21:48.463692 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 12:21:48.463759 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:21:48.464459 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 12:21:48.464512 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:21:48.465306 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:21:48.465356 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:21:48.493548 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 12:21:48.493869 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 12:21:48.493927 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:21:48.494251 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 17 12:21:48.494301 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:21:48.494736 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 12:21:48.494786 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:21:48.495076 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:21:48.495132 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:21:48.501325 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 12:21:48.501455 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 12:21:48.574651 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 12:21:48.574812 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 12:21:48.575658 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 12:21:48.575722 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 12:21:48.582431 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 12:21:48.582571 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 12:21:48.583369 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 12:21:48.587661 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 12:21:48.601923 systemd[1]: Switching root.
Jan 17 12:21:48.634314 systemd-journald[192]: Journal stopped
Jan 17 12:21:50.017354 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Jan 17 12:21:50.017977 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 12:21:50.017994 kernel: SELinux: policy capability open_perms=1
Jan 17 12:21:50.018009 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 12:21:50.018022 kernel: SELinux: policy capability always_check_network=0
Jan 17 12:21:50.018034 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 12:21:50.018048 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 12:21:50.018059 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 12:21:50.018070 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 12:21:50.018082 kernel: audit: type=1403 audit(1737116509.123:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 12:21:50.018094 systemd[1]: Successfully loaded SELinux policy in 46.382ms.
Jan 17 12:21:50.018120 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.258ms.
Jan 17 12:21:50.018135 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:21:50.018147 systemd[1]: Detected virtualization kvm.
Jan 17 12:21:50.018931 systemd[1]: Detected architecture x86-64.
Jan 17 12:21:50.018961 systemd[1]: Detected first boot.
Jan 17 12:21:50.018974 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:21:50.018986 zram_generator::config[1054]: No configuration found.
Jan 17 12:21:50.019001 systemd[1]: Populated /etc with preset unit settings.
Jan 17 12:21:50.019013 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 12:21:50.019025 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 12:21:50.019037 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 12:21:50.019050 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 12:21:50.019062 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 12:21:50.019074 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 12:21:50.019088 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 12:21:50.019108 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 12:21:50.019120 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 12:21:50.019133 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 12:21:50.019145 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 12:21:50.019157 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:21:50.019169 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:21:50.019181 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 12:21:50.019194 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 12:21:50.019206 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 12:21:50.019220 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:21:50.019232 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 12:21:50.019244 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:21:50.019256 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 12:21:50.019273 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 12:21:50.019285 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:21:50.019297 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 12:21:50.019311 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:21:50.019323 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:21:50.019336 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:21:50.019349 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:21:50.019366 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 12:21:50.019406 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 12:21:50.019425 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:21:50.019438 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:21:50.019450 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:21:50.019461 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 12:21:50.019476 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 12:21:50.019488 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 12:21:50.019500 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 12:21:50.019513 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:21:50.019525 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 12:21:50.019537 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 12:21:50.019548 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 12:21:50.019561 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 12:21:50.019575 systemd[1]: Reached target machines.target - Containers.
Jan 17 12:21:50.019587 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 12:21:50.019599 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:21:50.019611 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:21:50.019623 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 12:21:50.019635 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:21:50.019647 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:21:50.019660 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:21:50.019674 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 12:21:50.019686 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:21:50.019698 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 12:21:50.019710 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 12:21:50.019722 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 12:21:50.019733 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 12:21:50.019745 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 12:21:50.019757 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:21:50.019769 kernel: fuse: init (API version 7.39)
Jan 17 12:21:50.019783 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:21:50.019794 kernel: loop: module loaded
Jan 17 12:21:50.019806 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 12:21:50.019818 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 12:21:50.019829 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:21:50.019859 systemd-journald[1117]: Collecting audit messages is disabled.
Jan 17 12:21:50.019882 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 12:21:50.019894 systemd[1]: Stopped verity-setup.service.
Jan 17 12:21:50.019909 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:21:50.019920 systemd-journald[1117]: Journal started
Jan 17 12:21:50.019950 systemd-journald[1117]: Runtime Journal (/run/log/journal/4faf820c8d0441d9825f14dfd5a18ab1) is 6.0M, max 48.4M, 42.3M free.
Jan 17 12:21:49.781409 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 12:21:49.797476 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 17 12:21:49.797915 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 12:21:50.025776 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:21:50.026715 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 12:21:50.028384 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 12:21:50.029950 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 12:21:50.031397 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 12:21:50.033144 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 12:21:50.034805 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 12:21:50.036541 kernel: ACPI: bus type drm_connector registered
Jan 17 12:21:50.036951 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:21:50.039277 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 12:21:50.039604 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 12:21:50.041495 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:21:50.041704 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:21:50.043296 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 12:21:50.043522 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 12:21:50.044992 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:21:50.045185 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:21:50.046942 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 12:21:50.047143 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 12:21:50.048699 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:21:50.048891 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:21:50.050483 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:21:50.051983 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 12:21:50.053794 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 12:21:50.066759 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 12:21:50.080489 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 12:21:50.082764 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 12:21:50.083886 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 12:21:50.083916 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:21:50.085914 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 12:21:50.088267 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 12:21:50.092955 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 12:21:50.094127 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:21:50.096880 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 12:21:50.100977 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 12:21:50.102269 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:21:50.105026 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 12:21:50.107039 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:21:50.108595 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:21:50.114520 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 12:21:50.120082 systemd-journald[1117]: Time spent on flushing to /var/log/journal/4faf820c8d0441d9825f14dfd5a18ab1 is 13.747ms for 936 entries.
Jan 17 12:21:50.120082 systemd-journald[1117]: System Journal (/var/log/journal/4faf820c8d0441d9825f14dfd5a18ab1) is 8.0M, max 195.6M, 187.6M free.
Jan 17 12:21:50.165840 systemd-journald[1117]: Received client request to flush runtime journal.
Jan 17 12:21:50.165892 kernel: loop0: detected capacity change from 0 to 142488
Jan 17 12:21:50.124571 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:21:50.127421 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 12:21:50.129241 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:21:50.131839 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 12:21:50.134067 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 12:21:50.135721 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 12:21:50.140791 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 12:21:50.154794 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 12:21:50.159225 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 12:21:50.169798 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 12:21:50.176004 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 17 12:21:50.176884 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 12:21:50.178688 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:21:50.180167 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
Jan 17 12:21:50.180220 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
Jan 17 12:21:50.182888 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 12:21:50.189631 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:21:50.197535 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 12:21:50.199635 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 12:21:50.200246 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 12:21:50.205419 kernel: loop1: detected capacity change from 0 to 211296
Jan 17 12:21:50.228005 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 12:21:50.238568 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:21:50.244403 kernel: loop2: detected capacity change from 0 to 140768
Jan 17 12:21:50.263578 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Jan 17 12:21:50.264092 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Jan 17 12:21:50.271220 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:21:50.292654 kernel: loop3: detected capacity change from 0 to 142488
Jan 17 12:21:50.304404 kernel: loop4: detected capacity change from 0 to 211296
Jan 17 12:21:50.312408 kernel: loop5: detected capacity change from 0 to 140768
Jan 17 12:21:50.324394 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 17 12:21:50.325102 (sd-merge)[1195]: Merged extensions into '/usr'.
Jan 17 12:21:50.329657 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 12:21:50.329758 systemd[1]: Reloading...
Jan 17 12:21:50.397414 zram_generator::config[1220]: No configuration found.
Jan 17 12:21:50.452869 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 12:21:50.526510 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:21:50.584815 systemd[1]: Reloading finished in 254 ms.
Jan 17 12:21:50.620539 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 12:21:50.622346 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 12:21:50.639770 systemd[1]: Starting ensure-sysext.service...
Jan 17 12:21:50.642590 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:21:50.649541 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Jan 17 12:21:50.649560 systemd[1]: Reloading...
Jan 17 12:21:50.668883 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 12:21:50.669302 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 12:21:50.670417 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 12:21:50.670715 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Jan 17 12:21:50.670789 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Jan 17 12:21:50.675220 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:21:50.675235 systemd-tmpfiles[1260]: Skipping /boot
Jan 17 12:21:50.689604 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:21:50.689746 systemd-tmpfiles[1260]: Skipping /boot
Jan 17 12:21:50.720412 zram_generator::config[1290]: No configuration found.
Jan 17 12:21:50.937122 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:21:50.987637 systemd[1]: Reloading finished in 337 ms.
Jan 17 12:21:51.005966 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:21:51.026172 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 12:21:51.031737 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 12:21:51.034110 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 12:21:51.037894 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:21:51.040680 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 12:21:51.045835 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:21:51.046065 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:21:51.057984 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:21:51.063828 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:21:51.068726 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:21:51.070309 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:21:51.070450 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:21:51.071418 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:21:51.071629 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:21:51.076653 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:21:51.076914 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:21:51.078994 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:21:51.079222 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:21:51.085534 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 12:21:51.091554 augenrules[1352]: No rules
Jan 17 12:21:51.091550 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:21:51.091789 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:21:51.102090 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:21:51.105431 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:21:51.108399 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:21:51.109950 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:21:51.114195 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 12:21:51.115460 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:21:51.117209 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 12:21:51.119667 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 12:21:51.121885 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 12:21:51.124163 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 12:21:51.126085 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:21:51.126314 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:21:51.128059 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:21:51.128256 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:21:51.130125 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:21:51.130349 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:21:51.142969 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:21:51.143172 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:21:51.161662 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:21:51.164220 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:21:51.166573 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:21:51.170812 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:21:51.172537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:21:51.175824 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:21:51.180710 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:21:51.182563 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:21:51.182700 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:21:51.184631 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:21:51.187145 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:21:51.187443 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:21:51.189533 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:21:51.189875 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:21:51.191964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:21:51.192203 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:21:51.194270 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 17 12:21:51.194634 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:21:51.197004 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:21:51.203127 systemd[1]: Finished ensure-sysext.service. Jan 17 12:21:51.208342 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:21:51.208495 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:21:51.211914 systemd-udevd[1376]: Using default interface naming scheme 'v255'. Jan 17 12:21:51.216564 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:21:51.245728 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:21:51.252878 systemd-resolved[1329]: Positive Trust Anchors: Jan 17 12:21:51.254224 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:21:51.254266 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:21:51.257628 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:21:51.264290 systemd-resolved[1329]: Defaulting to hostname 'linux'. Jan 17 12:21:51.266660 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 17 12:21:51.268101 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:21:51.288692 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:21:51.301396 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1396) Jan 17 12:21:51.315899 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:21:51.317704 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:21:51.337287 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:21:51.347554 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:21:51.351105 systemd-networkd[1391]: lo: Link UP Jan 17 12:21:51.351112 systemd-networkd[1391]: lo: Gained carrier Jan 17 12:21:51.355596 systemd-networkd[1391]: Enumeration completed Jan 17 12:21:51.355684 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:21:51.356996 systemd[1]: Reached target network.target - Network. Jan 17 12:21:51.357862 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:21:51.357867 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:21:51.358680 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:21:51.358708 systemd-networkd[1391]: eth0: Link UP Jan 17 12:21:51.358712 systemd-networkd[1391]: eth0: Gained carrier Jan 17 12:21:51.358721 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 17 12:21:51.364597 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:21:51.370668 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:21:51.373563 systemd-networkd[1391]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:21:51.374252 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. Jan 17 12:21:51.375171 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 12:21:51.375238 systemd-timesyncd[1385]: Initial clock synchronization to Fri 2025-01-17 12:21:51.637229 UTC. Jan 17 12:21:51.384400 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 12:21:51.392393 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:21:51.392457 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 12:21:51.395194 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 12:21:51.395411 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 12:21:51.399389 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 12:21:51.464403 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:21:51.465935 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:21:51.529846 kernel: kvm_amd: TSC scaling supported Jan 17 12:21:51.529915 kernel: kvm_amd: Nested Virtualization enabled Jan 17 12:21:51.529931 kernel: kvm_amd: Nested Paging enabled Jan 17 12:21:51.529960 kernel: kvm_amd: LBR virtualization supported Jan 17 12:21:51.531686 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 17 12:21:51.531723 kernel: kvm_amd: Virtual GIF supported Jan 17 12:21:51.565416 kernel: EDAC MC: Ver: 3.0.0 Jan 17 12:21:51.599117 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Jan 17 12:21:51.607128 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:21:51.618552 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:21:51.643952 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:21:51.676149 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:21:51.677840 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:21:51.679044 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:21:51.680299 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:21:51.681629 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:21:51.683190 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:21:51.684475 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:21:51.685797 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:21:51.687109 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:21:51.687147 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:21:51.688130 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:21:51.689963 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:21:51.693046 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:21:51.709968 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:21:51.717002 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:21:51.718659 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jan 17 12:21:51.719871 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:21:51.720875 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:21:51.721881 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:21:51.721922 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:21:51.722959 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:21:51.725259 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:21:51.728472 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:21:51.732181 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:21:51.760256 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:21:51.762261 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:21:51.764749 jq[1438]: false Jan 17 12:21:51.762577 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:21:51.766425 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:21:51.771700 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 17 12:21:51.784278 extend-filesystems[1439]: Found loop3 Jan 17 12:21:51.789241 extend-filesystems[1439]: Found loop4 Jan 17 12:21:51.789241 extend-filesystems[1439]: Found loop5 Jan 17 12:21:51.789241 extend-filesystems[1439]: Found sr0 Jan 17 12:21:51.789241 extend-filesystems[1439]: Found vda Jan 17 12:21:51.789241 extend-filesystems[1439]: Found vda1 Jan 17 12:21:51.789241 extend-filesystems[1439]: Found vda2 Jan 17 12:21:51.789241 extend-filesystems[1439]: Found vda3 Jan 17 12:21:51.789241 extend-filesystems[1439]: Found usr Jan 17 12:21:51.789241 extend-filesystems[1439]: Found vda4 Jan 17 12:21:51.789241 extend-filesystems[1439]: Found vda6 Jan 17 12:21:51.789241 extend-filesystems[1439]: Found vda7 Jan 17 12:21:51.789241 extend-filesystems[1439]: Found vda9 Jan 17 12:21:51.789241 extend-filesystems[1439]: Checking size of /dev/vda9 Jan 17 12:21:51.785652 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:21:51.806345 dbus-daemon[1437]: [system] SELinux support is enabled Jan 17 12:21:51.788004 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:21:51.791635 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:21:51.829058 update_engine[1453]: I20250117 12:21:51.821091 1453 main.cc:92] Flatcar Update Engine starting Jan 17 12:21:51.829058 update_engine[1453]: I20250117 12:21:51.822561 1453 update_check_scheduler.cc:74] Next update check in 9m32s Jan 17 12:21:51.793980 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:21:51.829411 jq[1455]: true Jan 17 12:21:51.809170 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:21:51.811363 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 17 12:21:51.816325 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:21:51.819946 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:21:51.820199 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:21:51.820664 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:21:51.820915 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:21:51.824184 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:21:51.824419 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:21:51.834542 jq[1458]: true Jan 17 12:21:51.839005 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:21:51.847362 extend-filesystems[1439]: Resized partition /dev/vda9 Jan 17 12:21:51.851535 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:21:51.851559 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:21:51.859315 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 12:21:51.859336 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:21:51.879505 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1396) Jan 17 12:21:51.879575 extend-filesystems[1471]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:21:51.881144 systemd-logind[1444]: New seat seat0. 
Jan 17 12:21:51.881307 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:21:51.881328 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:21:51.897708 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:21:51.912604 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:21:51.923632 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:21:52.002974 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:21:52.031956 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:21:52.042630 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:21:52.050155 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:21:52.050461 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:21:52.054916 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:21:52.079432 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 12:21:52.083357 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:21:52.096774 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:21:52.103758 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:21:52.105088 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:21:52.149678 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:21:52.206560 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 12:21:52.213369 bash[1492]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:21:52.216313 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jan 17 12:21:52.219312 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 12:21:52.231126 extend-filesystems[1471]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:21:52.231126 extend-filesystems[1471]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:21:52.231126 extend-filesystems[1471]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 12:21:52.236912 extend-filesystems[1439]: Resized filesystem in /dev/vda9 Jan 17 12:21:52.232365 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:21:52.232621 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:21:52.420876 containerd[1459]: time="2025-01-17T12:21:52.420709227Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:21:52.432503 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:21:52.447657 systemd[1]: Started sshd@0-10.0.0.151:22-10.0.0.1:41414.service - OpenSSH per-connection server daemon (10.0.0.1:41414). Jan 17 12:21:52.449425 containerd[1459]: time="2025-01-17T12:21:52.448976050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:52.452204 containerd[1459]: time="2025-01-17T12:21:52.452154289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:52.452204 containerd[1459]: time="2025-01-17T12:21:52.452197374Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:21:52.452273 containerd[1459]: time="2025-01-17T12:21:52.452228944Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 17 12:21:52.452502 containerd[1459]: time="2025-01-17T12:21:52.452481289Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:21:52.452531 containerd[1459]: time="2025-01-17T12:21:52.452505097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:52.452616 containerd[1459]: time="2025-01-17T12:21:52.452594507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:52.452640 containerd[1459]: time="2025-01-17T12:21:52.452614207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:52.452876 containerd[1459]: time="2025-01-17T12:21:52.452844803Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:52.452876 containerd[1459]: time="2025-01-17T12:21:52.452866533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:52.452935 containerd[1459]: time="2025-01-17T12:21:52.452881184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:52.452935 containerd[1459]: time="2025-01-17T12:21:52.452892172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:52.453027 containerd[1459]: time="2025-01-17T12:21:52.453006694Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 17 12:21:52.453493 containerd[1459]: time="2025-01-17T12:21:52.453285466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:21:52.453493 containerd[1459]: time="2025-01-17T12:21:52.453484038Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:21:52.453563 containerd[1459]: time="2025-01-17T12:21:52.453505611Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:21:52.454195 containerd[1459]: time="2025-01-17T12:21:52.453654000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:21:52.454195 containerd[1459]: time="2025-01-17T12:21:52.453747909Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:21:52.551473 sshd[1519]: Accepted publickey for core from 10.0.0.1 port 41414 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:21:52.554017 sshd[1519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:52.563628 systemd-logind[1444]: New session 1 of user core. Jan 17 12:21:52.564974 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:21:52.571517 containerd[1459]: time="2025-01-17T12:21:52.571472787Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:21:52.571559 containerd[1459]: time="2025-01-17T12:21:52.571545061Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:21:52.571579 containerd[1459]: time="2025-01-17T12:21:52.571566997Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 17 12:21:52.571598 containerd[1459]: time="2025-01-17T12:21:52.571587909Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:21:52.571629 containerd[1459]: time="2025-01-17T12:21:52.571605602Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:21:52.571812 containerd[1459]: time="2025-01-17T12:21:52.571789625Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:21:52.572669 containerd[1459]: time="2025-01-17T12:21:52.572291480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:21:52.572669 containerd[1459]: time="2025-01-17T12:21:52.572473621Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:21:52.572669 containerd[1459]: time="2025-01-17T12:21:52.572498806Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:21:52.572669 containerd[1459]: time="2025-01-17T12:21:52.572517172Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:21:52.572669 containerd[1459]: time="2025-01-17T12:21:52.572536077Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:21:52.572669 containerd[1459]: time="2025-01-17T12:21:52.572554515Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:21:52.572669 containerd[1459]: time="2025-01-17T12:21:52.572571422Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jan 17 12:21:52.572669 containerd[1459]: time="2025-01-17T12:21:52.572611528Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:21:52.572669 containerd[1459]: time="2025-01-17T12:21:52.572633081Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:21:52.572669 containerd[1459]: time="2025-01-17T12:21:52.572653857Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:21:52.572858 containerd[1459]: time="2025-01-17T12:21:52.572786880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:21:52.572880 containerd[1459]: time="2025-01-17T12:21:52.572853142Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:21:52.572923 containerd[1459]: time="2025-01-17T12:21:52.572897863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:21:52.572947 containerd[1459]: time="2025-01-17T12:21:52.572931212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:21:52.572974 containerd[1459]: time="2025-01-17T12:21:52.572949847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:21:52.573001 containerd[1459]: time="2025-01-17T12:21:52.572968864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:21:52.573001 containerd[1459]: time="2025-01-17T12:21:52.572989103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 17 12:21:52.573038 containerd[1459]: time="2025-01-17T12:21:52.573007925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:21:52.573038 containerd[1459]: time="2025-01-17T12:21:52.573024522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:21:52.573089 containerd[1459]: time="2025-01-17T12:21:52.573052335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:21:52.573089 containerd[1459]: time="2025-01-17T12:21:52.573073650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:21:52.573129 containerd[1459]: time="2025-01-17T12:21:52.573106202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:21:52.573129 containerd[1459]: time="2025-01-17T12:21:52.573121599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:21:52.573172 containerd[1459]: time="2025-01-17T12:21:52.573137420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:21:52.573172 containerd[1459]: time="2025-01-17T12:21:52.573156002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:21:52.573214 containerd[1459]: time="2025-01-17T12:21:52.573199936Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:21:52.573274 containerd[1459]: time="2025-01-17T12:21:52.573233740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:21:52.573359 containerd[1459]: time="2025-01-17T12:21:52.573262568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 17 12:21:52.573489 containerd[1459]: time="2025-01-17T12:21:52.573334904Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:21:52.573595 containerd[1459]: time="2025-01-17T12:21:52.573533372Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:21:52.573595 containerd[1459]: time="2025-01-17T12:21:52.573552969Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:21:52.573595 containerd[1459]: time="2025-01-17T12:21:52.573565199Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:21:52.573595 containerd[1459]: time="2025-01-17T12:21:52.573580214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:21:52.573595 containerd[1459]: time="2025-01-17T12:21:52.573589939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:21:52.573724 containerd[1459]: time="2025-01-17T12:21:52.573603350Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:21:52.573724 containerd[1459]: time="2025-01-17T12:21:52.573615425Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:21:52.573724 containerd[1459]: time="2025-01-17T12:21:52.573625958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:21:52.573996 containerd[1459]: time="2025-01-17T12:21:52.573913556Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:21:52.573996 containerd[1459]: time="2025-01-17T12:21:52.573980895Z" level=info msg="Connect containerd service" Jan 17 12:21:52.574319 containerd[1459]: time="2025-01-17T12:21:52.574018248Z" level=info msg="using legacy CRI server" Jan 17 12:21:52.574319 containerd[1459]: time="2025-01-17T12:21:52.574025719Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:21:52.574319 containerd[1459]: time="2025-01-17T12:21:52.574175865Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:21:52.574967 containerd[1459]: time="2025-01-17T12:21:52.574923806Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:21:52.575116 containerd[1459]: time="2025-01-17T12:21:52.575039507Z" level=info msg="Start subscribing containerd event" Jan 17 12:21:52.575157 containerd[1459]: time="2025-01-17T12:21:52.575131556Z" level=info msg="Start recovering state" Jan 17 12:21:52.575258 containerd[1459]: time="2025-01-17T12:21:52.575229139Z" level=info msg="Start event monitor" Jan 17 12:21:52.575296 containerd[1459]: time="2025-01-17T12:21:52.575258587Z" level=info msg="Start 
snapshots syncer" Jan 17 12:21:52.575296 containerd[1459]: time="2025-01-17T12:21:52.575279126Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:21:52.575296 containerd[1459]: time="2025-01-17T12:21:52.575287352Z" level=info msg="Start streaming server" Jan 17 12:21:52.575545 containerd[1459]: time="2025-01-17T12:21:52.575508997Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:21:52.575606 containerd[1459]: time="2025-01-17T12:21:52.575586145Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:21:52.575676 containerd[1459]: time="2025-01-17T12:21:52.575656496Z" level=info msg="containerd successfully booted in 0.156141s" Jan 17 12:21:52.577722 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:21:52.579784 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:21:52.614528 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:21:52.630708 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:21:52.646400 (systemd)[1525]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:21:52.820875 systemd[1525]: Queued start job for default target default.target. Jan 17 12:21:52.830770 systemd[1525]: Created slice app.slice - User Application Slice. Jan 17 12:21:52.830802 systemd[1525]: Reached target paths.target - Paths. Jan 17 12:21:52.830821 systemd[1525]: Reached target timers.target - Timers. Jan 17 12:21:52.832533 systemd[1525]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:21:52.847543 systemd[1525]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:21:52.847721 systemd[1525]: Reached target sockets.target - Sockets. Jan 17 12:21:52.847743 systemd[1525]: Reached target basic.target - Basic System. 
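The CRI config dump that containerd logs at startup (the long "Start cri plugin with config" entry above) encodes a few operationally important choices: the overlayfs snapshotter, runc as the default runtime with SystemdCgroup:true, and registry.k8s.io/pause:3.8 as the sandbox image. The equivalent /etc/containerd/config.toml stanza would look roughly like this — a sketch against the containerd v2 config schema, not a file recovered from this host:

```toml
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  # Matches SandboxImage:registry.k8s.io/pause:3.8 in the dump above.
  sandbox_image = "registry.k8s.io/pause:3.8"

[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
  default_runtime_name = "runc"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  # Matches Options:map[SystemdCgroup:true]; must agree with the
  # kubelet's cgroupDriver setting on the same host.
  SystemdCgroup = true
```

SystemdCgroup here and the kubelet's cgroup driver must agree, or pods fail with cgroup errors after the node joins a cluster.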
Jan 17 12:21:52.847812 systemd[1525]: Reached target default.target - Main User Target. Jan 17 12:21:52.847851 systemd[1525]: Startup finished in 191ms. Jan 17 12:21:52.848490 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:21:52.851529 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:21:52.918010 systemd[1]: Started sshd@1-10.0.0.151:22-10.0.0.1:41424.service - OpenSSH per-connection server daemon (10.0.0.1:41424). Jan 17 12:21:52.972005 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 41424 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:21:52.974047 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:52.978718 systemd-logind[1444]: New session 2 of user core. Jan 17 12:21:52.989587 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:21:53.049803 sshd[1536]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:53.057352 systemd[1]: sshd@1-10.0.0.151:22-10.0.0.1:41424.service: Deactivated successfully. Jan 17 12:21:53.059731 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:21:53.062044 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:21:53.063937 systemd[1]: Started sshd@2-10.0.0.151:22-10.0.0.1:41440.service - OpenSSH per-connection server daemon (10.0.0.1:41440). Jan 17 12:21:53.066711 systemd-logind[1444]: Removed session 2. Jan 17 12:21:53.106191 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 41440 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:21:53.108278 sshd[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:53.113211 systemd-logind[1444]: New session 3 of user core. Jan 17 12:21:53.127889 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 17 12:21:53.135566 systemd-networkd[1391]: eth0: Gained IPv6LL Jan 17 12:21:53.139895 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:21:53.141938 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:21:53.165739 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 12:21:53.168837 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:21:53.171524 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:21:53.187849 sshd[1543]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:53.192973 systemd[1]: sshd@2-10.0.0.151:22-10.0.0.1:41440.service: Deactivated successfully. Jan 17 12:21:53.195781 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 12:21:53.196008 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 12:21:53.198036 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:21:53.199615 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:21:53.201581 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:21:53.204321 systemd-logind[1444]: Removed session 3. Jan 17 12:21:53.204361 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:21:54.346355 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:21:54.348320 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:21:54.350288 systemd[1]: Startup finished in 774ms (kernel) + 5.402s (initrd) + 5.271s (userspace) = 11.448s. 
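A small aside on the "Startup finished in 774ms (kernel) + 5.402s (initrd) + 5.271s (userspace) = 11.448s" line above: the displayed components sum to 11.447 s, one millisecond short of the displayed total, because systemd rounds each microsecond-precise duration independently before printing. Checking the arithmetic on the printed values:

```python
# Printed components from the log line, converted to seconds.
kernel, initrd, userspace = 0.774, 5.402, 5.271

# Summing the already-rounded values loses up to ~1.5 ms vs. systemd's
# internal total, which is computed from the raw microsecond counters.
total_of_printed = round(kernel + initrd + userspace, 3)
print(total_of_printed)  # 11.447 -- the log's 11.448s is not a typo, just rounding
```
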
Jan 17 12:21:54.367856 (kubelet)[1571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:21:55.158636 kubelet[1571]: E0117 12:21:55.158525 1571 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:21:55.163810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:21:55.164007 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:21:55.164382 systemd[1]: kubelet.service: Consumed 1.776s CPU time. Jan 17 12:22:03.363588 systemd[1]: Started sshd@3-10.0.0.151:22-10.0.0.1:45656.service - OpenSSH per-connection server daemon (10.0.0.1:45656). Jan 17 12:22:03.402412 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 45656 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:22:03.404002 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:03.407961 systemd-logind[1444]: New session 4 of user core. Jan 17 12:22:03.422517 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:22:03.480047 sshd[1585]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:03.491624 systemd[1]: sshd@3-10.0.0.151:22-10.0.0.1:45656.service: Deactivated successfully. Jan 17 12:22:03.493685 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:22:03.495568 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:22:03.505720 systemd[1]: Started sshd@4-10.0.0.151:22-10.0.0.1:45672.service - OpenSSH per-connection server daemon (10.0.0.1:45672). Jan 17 12:22:03.507170 systemd-logind[1444]: Removed session 4. 
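The kubelet crash above (run.go:74, exit status 1, then "Failed with result 'exit-code'") is the expected first-boot state on this image: /var/lib/kubelet/config.yaml does not exist until `kubeadm init` or `kubeadm join` writes it, which is presumably what the later session performs. If the file had to be created by hand instead, a minimal KubeletConfiguration would look roughly like this — a sketch whose values are assumptions stitched from elsewhere in this log, not a recovered file:

```yaml
# /var/lib/kubelet/config.yaml (hypothetical minimal sketch)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Must match SystemdCgroup:true in the containerd CRI config dumped earlier.
cgroupDriver: systemd
# The later kubelet run logs "Adding static pod path" for this directory.
staticPodPath: /etc/kubernetes/manifests
authentication:
  x509:
    # Path taken from the "client-ca-bundle" controller line in the log.
    clientCAFile: /etc/kubernetes/pki/ca.crt
```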
Jan 17 12:22:03.538101 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 45672 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:22:03.539713 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:03.543926 systemd-logind[1444]: New session 5 of user core. Jan 17 12:22:03.550520 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:22:03.601923 sshd[1592]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:03.609027 systemd[1]: sshd@4-10.0.0.151:22-10.0.0.1:45672.service: Deactivated successfully. Jan 17 12:22:03.610752 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:22:03.612493 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:22:03.626870 systemd[1]: Started sshd@5-10.0.0.151:22-10.0.0.1:45684.service - OpenSSH per-connection server daemon (10.0.0.1:45684). Jan 17 12:22:03.628280 systemd-logind[1444]: Removed session 5. Jan 17 12:22:03.660327 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 45684 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:22:03.661935 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:03.666272 systemd-logind[1444]: New session 6 of user core. Jan 17 12:22:03.676575 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:22:03.731679 sshd[1599]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:03.740870 systemd[1]: sshd@5-10.0.0.151:22-10.0.0.1:45684.service: Deactivated successfully. Jan 17 12:22:03.742447 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:22:03.744012 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:22:03.745198 systemd[1]: Started sshd@6-10.0.0.151:22-10.0.0.1:45696.service - OpenSSH per-connection server daemon (10.0.0.1:45696). Jan 17 12:22:03.745999 systemd-logind[1444]: Removed session 6. 
Jan 17 12:22:03.781752 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 45696 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:22:03.783154 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:03.786817 systemd-logind[1444]: New session 7 of user core. Jan 17 12:22:03.796546 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:22:03.857656 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:22:03.858103 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:22:03.878844 sudo[1609]: pam_unix(sudo:session): session closed for user root Jan 17 12:22:03.880825 sshd[1606]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:03.899574 systemd[1]: sshd@6-10.0.0.151:22-10.0.0.1:45696.service: Deactivated successfully. Jan 17 12:22:03.901432 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:22:03.903131 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:22:03.904812 systemd[1]: Started sshd@7-10.0.0.151:22-10.0.0.1:45702.service - OpenSSH per-connection server daemon (10.0.0.1:45702). Jan 17 12:22:03.905617 systemd-logind[1444]: Removed session 7. Jan 17 12:22:03.942127 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 45702 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:22:03.943653 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:03.947349 systemd-logind[1444]: New session 8 of user core. Jan 17 12:22:03.962500 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 17 12:22:04.016364 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:22:04.016720 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:22:04.020451 sudo[1618]: pam_unix(sudo:session): session closed for user root Jan 17 12:22:04.026283 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:22:04.026637 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:22:04.045588 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:22:04.047320 auditctl[1621]: No rules Jan 17 12:22:04.047734 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:22:04.047944 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:22:04.050481 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:22:04.080101 augenrules[1639]: No rules Jan 17 12:22:04.081922 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:22:04.083174 sudo[1617]: pam_unix(sudo:session): session closed for user root Jan 17 12:22:04.085097 sshd[1614]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:04.103543 systemd[1]: sshd@7-10.0.0.151:22-10.0.0.1:45702.service: Deactivated successfully. Jan 17 12:22:04.105716 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:22:04.107085 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:22:04.119653 systemd[1]: Started sshd@8-10.0.0.151:22-10.0.0.1:45714.service - OpenSSH per-connection server daemon (10.0.0.1:45714). Jan 17 12:22:04.120641 systemd-logind[1444]: Removed session 8. 
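The sudo sequence above deletes the two drop-in files 80-selinux.rules and 99-default.rules and restarts audit-rules.service; both `auditctl` and `augenrules` then report "No rules", since /etc/audit/rules.d/ is now empty. On restart, augenrules concatenates every *.rules file under /etc/audit/rules.d/ into /etc/audit/audit.rules, so re-enabling auditing just means dropping a file back in. A hypothetical example file (not from this host):

```
# /etc/audit/rules.d/10-example.rules (hypothetical)
# -D flushes any previously loaded rules before the new set is applied.
-D
# Watch executions of setenforce, tagged with a searchable key.
-w /usr/sbin/setenforce -p x -k selinux-toggle
```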
Jan 17 12:22:04.151681 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 45714 ssh2: RSA SHA256:SlDwm7Or6/NzPo2pwmoc3QpDgnxlCMQ0MaN4S0v55gM Jan 17 12:22:04.153143 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:22:04.157076 systemd-logind[1444]: New session 9 of user core. Jan 17 12:22:04.163512 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:22:04.217013 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:22:04.217475 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:22:04.237664 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 12:22:04.254584 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 12:22:04.254876 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 12:22:04.738758 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:04.738989 systemd[1]: kubelet.service: Consumed 1.776s CPU time. Jan 17 12:22:04.755788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:04.774439 systemd[1]: Reloading requested from client PID 1697 ('systemctl') (unit session-9.scope)... Jan 17 12:22:04.774460 systemd[1]: Reloading... Jan 17 12:22:04.861407 zram_generator::config[1736]: No configuration found. Jan 17 12:22:05.067077 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:22:05.143855 systemd[1]: Reloading finished in 368 ms. Jan 17 12:22:05.191310 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:22:05.191436 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:22:05.191703 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
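The reload warning above ("/usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/") is systemd rewriting the socket path on the fly while asking for the unit file to be updated. The fix it requests is a one-line change in the [Socket] stanza, since /var/run is merely a compatibility symlink to /run:

```ini
[Socket]
# Before: ListenStream=/var/run/docker.sock  (legacy path below /var/run/)
ListenStream=/run/docker.sock
```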
Jan 17 12:22:05.193239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:22:05.336837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:22:05.341511 (kubelet)[1783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:22:05.387791 kubelet[1783]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:22:05.388320 kubelet[1783]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:22:05.388476 kubelet[1783]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
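The two "should be set via the config file" deprecation warnings above point at the same remedy: move those flags into the file passed via --config. Under the KubeletConfiguration v1beta1 schema the mapping is roughly as follows — field names are from the upstream schema, and the values are this host's paths (taken from elsewhere in the log) used as illustrative assumptions:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Replaces --container-runtime-endpoint; socket path appears in the
# containerd config dump earlier in this log.
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# Replaces --volume-plugin-dir; the Flexvolume probe below logs this path.
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
# --pod-infra-container-image has no config-file field: per the warning,
# newer kubelets take the sandbox image from the CRI runtime instead
# (containerd's sandbox_image setting).
```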
Jan 17 12:22:05.388731 kubelet[1783]: I0117 12:22:05.388657 1783 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:22:05.748262 kubelet[1783]: I0117 12:22:05.748211 1783 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:22:05.748262 kubelet[1783]: I0117 12:22:05.748246 1783 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:22:05.748504 kubelet[1783]: I0117 12:22:05.748488 1783 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:22:05.764802 kubelet[1783]: I0117 12:22:05.764760 1783 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:22:05.778528 kubelet[1783]: I0117 12:22:05.778474 1783 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:22:05.778777 kubelet[1783]: I0117 12:22:05.778741 1783 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:22:05.778930 kubelet[1783]: I0117 12:22:05.778900 1783 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:22:05.778930 kubelet[1783]: I0117 12:22:05.778930 1783 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:22:05.779108 kubelet[1783]: I0117 12:22:05.778940 1783 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:22:05.779108 kubelet[1783]: I0117 12:22:05.779061 1783 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:22:05.779173 kubelet[1783]: I0117 12:22:05.779164 1783 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:22:05.779205 kubelet[1783]: I0117 12:22:05.779182 1783 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 
17 12:22:05.779230 kubelet[1783]: I0117 12:22:05.779214 1783 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:22:05.779260 kubelet[1783]: I0117 12:22:05.779235 1783 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:22:05.779440 kubelet[1783]: E0117 12:22:05.779394 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:05.779701 kubelet[1783]: E0117 12:22:05.779496 1783 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:05.780399 kubelet[1783]: I0117 12:22:05.780358 1783 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:22:05.783227 kubelet[1783]: I0117 12:22:05.783175 1783 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:22:05.783438 kubelet[1783]: W0117 12:22:05.783407 1783 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.0.0.151" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 17 12:22:05.783517 kubelet[1783]: E0117 12:22:05.783443 1783 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.151" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 17 12:22:05.784074 kubelet[1783]: W0117 12:22:05.783576 1783 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 17 12:22:05.784074 kubelet[1783]: E0117 12:22:05.783606 1783 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list 
*v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 17 12:22:05.784468 kubelet[1783]: W0117 12:22:05.784448 1783 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:22:05.785401 kubelet[1783]: I0117 12:22:05.785262 1783 server.go:1256] "Started kubelet" Jan 17 12:22:05.785401 kubelet[1783]: I0117 12:22:05.785332 1783 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:22:05.786243 kubelet[1783]: I0117 12:22:05.785898 1783 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:22:05.786734 kubelet[1783]: I0117 12:22:05.786289 1783 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:22:05.786734 kubelet[1783]: I0117 12:22:05.786298 1783 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:22:05.786734 kubelet[1783]: I0117 12:22:05.786459 1783 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:22:05.786734 kubelet[1783]: I0117 12:22:05.786569 1783 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:22:05.788194 kubelet[1783]: I0117 12:22:05.788167 1783 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:22:05.788269 kubelet[1783]: I0117 12:22:05.788250 1783 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:22:05.788999 kubelet[1783]: E0117 12:22:05.788883 1783 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.151\" not found" Jan 17 12:22:05.789662 kubelet[1783]: I0117 12:22:05.789634 1783 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:22:05.790833 kubelet[1783]: I0117 12:22:05.789720 1783 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:22:05.792410 kubelet[1783]: E0117 12:22:05.792361 1783 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:22:05.792410 kubelet[1783]: I0117 12:22:05.792406 1783 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:22:05.798454 kubelet[1783]: W0117 12:22:05.798419 1783 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 17 12:22:05.798454 kubelet[1783]: E0117 12:22:05.798453 1783 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 17 12:22:05.799547 kubelet[1783]: E0117 12:22:05.799524 1783 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.151.181b7a46cce3b99e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.151,UID:10.0.0.151,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.151,},FirstTimestamp:2025-01-17 12:22:05.785233822 +0000 UTC m=+0.439594077,LastTimestamp:2025-01-17 12:22:05.785233822 +0000 UTC m=+0.439594077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.151,}" Jan 17 12:22:05.799809 kubelet[1783]: E0117 12:22:05.799783 1783 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.151\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 17 12:22:05.857787 kubelet[1783]: E0117 12:22:05.857212 1783 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.151.181b7a46cd502090 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.151,UID:10.0.0.151,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.151,},FirstTimestamp:2025-01-17 12:22:05.792338064 +0000 UTC m=+0.446698319,LastTimestamp:2025-01-17 12:22:05.792338064 +0000 UTC m=+0.446698319,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.151,}" Jan 17 12:22:05.857787 kubelet[1783]: I0117 12:22:05.857506 1783 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:22:05.857787 kubelet[1783]: I0117 12:22:05.857517 1783 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:22:05.857787 kubelet[1783]: I0117 12:22:05.857541 1783 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:22:05.861767 kubelet[1783]: E0117 12:22:05.861722 1783 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{10.0.0.151.181b7a46ce2c8169 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.151,UID:10.0.0.151,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.151 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.151,},FirstTimestamp:2025-01-17 12:22:05.806780777 +0000 UTC m=+0.461141033,LastTimestamp:2025-01-17 12:22:05.806780777 +0000 UTC m=+0.461141033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.151,}" Jan 17 12:22:05.865635 kubelet[1783]: E0117 12:22:05.865584 1783 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.151.181b7a46ce2ca0f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.151,UID:10.0.0.151,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.151 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.151,},FirstTimestamp:2025-01-17 12:22:05.806788849 +0000 UTC m=+0.461149226,LastTimestamp:2025-01-17 12:22:05.806788849 +0000 UTC m=+0.461149226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.151,}" Jan 17 12:22:05.869223 kubelet[1783]: E0117 12:22:05.869187 1783 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.151.181b7a46ce2cb4aa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.151,UID:10.0.0.151,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.151 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.151,},FirstTimestamp:2025-01-17 12:22:05.806793898 +0000 UTC m=+0.461154153,LastTimestamp:2025-01-17 12:22:05.806793898 +0000 UTC m=+0.461154153,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.151,}" Jan 17 12:22:05.890251 kubelet[1783]: I0117 12:22:05.890213 1783 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.151" Jan 17 12:22:05.894203 kubelet[1783]: E0117 12:22:05.894173 1783 kubelet_node_status.go:96] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.151" Jan 17 12:22:05.894309 kubelet[1783]: E0117 12:22:05.894261 1783 event.go:346] "Server rejected event (will not retry!)" err="events \"10.0.0.151.181b7a46ce2c8169\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.151.181b7a46ce2c8169 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.151,UID:10.0.0.151,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.151 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.151,},FirstTimestamp:2025-01-17 12:22:05.806780777 +0000 UTC m=+0.461141033,LastTimestamp:2025-01-17 12:22:05.890160358 +0000 UTC m=+0.544520613,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.151,}" Jan 17 12:22:05.897208 kubelet[1783]: E0117 
12:22:05.897183 1783 event.go:346] "Server rejected event (will not retry!)" err="events \"10.0.0.151.181b7a46ce2ca0f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.151.181b7a46ce2ca0f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.151,UID:10.0.0.151,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.151 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.151,},FirstTimestamp:2025-01-17 12:22:05.806788849 +0000 UTC m=+0.461149226,LastTimestamp:2025-01-17 12:22:05.890172469 +0000 UTC m=+0.544532724,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.151,}" Jan 17 12:22:05.900609 kubelet[1783]: E0117 12:22:05.900592 1783 event.go:346] "Server rejected event (will not retry!)" err="events \"10.0.0.151.181b7a46ce2cb4aa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.151.181b7a46ce2cb4aa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.151,UID:10.0.0.151,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.151 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.151,},FirstTimestamp:2025-01-17 12:22:05.806793898 +0000 UTC m=+0.461154153,LastTimestamp:2025-01-17 12:22:05.89017515 +0000 UTC m=+0.544535405,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.151,}" Jan 17 12:22:06.007419 kubelet[1783]: E0117 12:22:06.005598 1783 controller.go:145] "Failed to ensure lease exists, will 
retry" err="leases.coordination.k8s.io \"10.0.0.151\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Jan 17 12:22:06.095945 kubelet[1783]: I0117 12:22:06.095906 1783 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.151" Jan 17 12:22:06.616563 kubelet[1783]: I0117 12:22:06.616503 1783 policy_none.go:49] "None policy: Start" Jan 17 12:22:06.617890 kubelet[1783]: I0117 12:22:06.617856 1783 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:22:06.617890 kubelet[1783]: I0117 12:22:06.617901 1783 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:22:06.618024 kubelet[1783]: I0117 12:22:06.617938 1783 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.151" Jan 17 12:22:06.620616 kubelet[1783]: I0117 12:22:06.620550 1783 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 17 12:22:06.621172 containerd[1459]: time="2025-01-17T12:22:06.621115624Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:22:06.621665 kubelet[1783]: I0117 12:22:06.621540 1783 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 17 12:22:06.630197 kubelet[1783]: E0117 12:22:06.630128 1783 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.151\" not found" Jan 17 12:22:06.632236 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:22:06.647010 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:22:06.649766 kubelet[1783]: I0117 12:22:06.649730 1783 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 17 12:22:06.650355 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 12:22:06.651310 kubelet[1783]: I0117 12:22:06.651174 1783 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:22:06.651310 kubelet[1783]: I0117 12:22:06.651215 1783 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:22:06.651310 kubelet[1783]: I0117 12:22:06.651232 1783 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:22:06.651453 kubelet[1783]: E0117 12:22:06.651363 1783 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:22:06.665758 kubelet[1783]: I0117 12:22:06.665712 1783 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:22:06.666474 kubelet[1783]: I0117 12:22:06.666121 1783 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:22:06.669576 kubelet[1783]: E0117 12:22:06.669535 1783 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.151\" not found" Jan 17 12:22:06.750303 kubelet[1783]: I0117 12:22:06.750201 1783 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 17 12:22:06.750601 kubelet[1783]: W0117 12:22:06.750539 1783 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 17 12:22:06.750601 kubelet[1783]: W0117 12:22:06.750566 1783 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Node ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: 
Unexpected watch close - watch lasted less than a second and no items received Jan 17 12:22:06.750601 kubelet[1783]: W0117 12:22:06.750579 1783 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 17 12:22:06.780273 kubelet[1783]: E0117 12:22:06.780222 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:06.780273 kubelet[1783]: I0117 12:22:06.780238 1783 apiserver.go:52] "Watching apiserver" Jan 17 12:22:06.788794 kubelet[1783]: I0117 12:22:06.788739 1783 topology_manager.go:215] "Topology Admit Handler" podUID="03f21fda-11b1-4a93-8ca1-41502cc623c7" podNamespace="kube-system" podName="cilium-5h79v" Jan 17 12:22:06.788997 kubelet[1783]: I0117 12:22:06.788903 1783 topology_manager.go:215] "Topology Admit Handler" podUID="ffb426e1-d226-4283-9ff3-ae80f71d9c36" podNamespace="kube-system" podName="kube-proxy-4qjrn" Jan 17 12:22:06.795942 systemd[1]: Created slice kubepods-besteffort-podffb426e1_d226_4283_9ff3_ae80f71d9c36.slice - libcontainer container kubepods-besteffort-podffb426e1_d226_4283_9ff3_ae80f71d9c36.slice. Jan 17 12:22:06.808077 systemd[1]: Created slice kubepods-burstable-pod03f21fda_11b1_4a93_8ca1_41502cc623c7.slice - libcontainer container kubepods-burstable-pod03f21fda_11b1_4a93_8ca1_41502cc623c7.slice. Jan 17 12:22:06.859412 sudo[1650]: pam_unix(sudo:session): session closed for user root Jan 17 12:22:06.861223 sshd[1647]: pam_unix(sshd:session): session closed for user core Jan 17 12:22:06.864563 systemd[1]: sshd@8-10.0.0.151:22-10.0.0.1:45714.service: Deactivated successfully. Jan 17 12:22:06.866304 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:22:06.867023 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. 
Jan 17 12:22:06.868026 systemd-logind[1444]: Removed session 9. Jan 17 12:22:06.888612 kubelet[1783]: I0117 12:22:06.888574 1783 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:22:06.894602 kubelet[1783]: I0117 12:22:06.894582 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pxfj\" (UniqueName: \"kubernetes.io/projected/ffb426e1-d226-4283-9ff3-ae80f71d9c36-kube-api-access-4pxfj\") pod \"kube-proxy-4qjrn\" (UID: \"ffb426e1-d226-4283-9ff3-ae80f71d9c36\") " pod="kube-system/kube-proxy-4qjrn" Jan 17 12:22:06.894662 kubelet[1783]: I0117 12:22:06.894625 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-cilium-cgroup\") pod \"cilium-5h79v\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") " pod="kube-system/cilium-5h79v" Jan 17 12:22:06.894662 kubelet[1783]: I0117 12:22:06.894651 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-cni-path\") pod \"cilium-5h79v\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") " pod="kube-system/cilium-5h79v" Jan 17 12:22:06.894755 kubelet[1783]: I0117 12:22:06.894726 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03f21fda-11b1-4a93-8ca1-41502cc623c7-cilium-config-path\") pod \"cilium-5h79v\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") " pod="kube-system/cilium-5h79v" Jan 17 12:22:06.894790 kubelet[1783]: I0117 12:22:06.894779 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/ffb426e1-d226-4283-9ff3-ae80f71d9c36-lib-modules\") pod \"kube-proxy-4qjrn\" (UID: \"ffb426e1-d226-4283-9ff3-ae80f71d9c36\") " pod="kube-system/kube-proxy-4qjrn" Jan 17 12:22:06.894850 kubelet[1783]: I0117 12:22:06.894828 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-hostproc\") pod \"cilium-5h79v\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") " pod="kube-system/cilium-5h79v" Jan 17 12:22:06.894882 kubelet[1783]: I0117 12:22:06.894864 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-host-proc-sys-net\") pod \"cilium-5h79v\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") " pod="kube-system/cilium-5h79v" Jan 17 12:22:06.894912 kubelet[1783]: I0117 12:22:06.894894 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03f21fda-11b1-4a93-8ca1-41502cc623c7-hubble-tls\") pod \"cilium-5h79v\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") " pod="kube-system/cilium-5h79v" Jan 17 12:22:06.894944 kubelet[1783]: I0117 12:22:06.894921 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt9sf\" (UniqueName: \"kubernetes.io/projected/03f21fda-11b1-4a93-8ca1-41502cc623c7-kube-api-access-wt9sf\") pod \"cilium-5h79v\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") " pod="kube-system/cilium-5h79v" Jan 17 12:22:06.894982 kubelet[1783]: I0117 12:22:06.894947 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ffb426e1-d226-4283-9ff3-ae80f71d9c36-kube-proxy\") pod \"kube-proxy-4qjrn\" (UID: 
\"ffb426e1-d226-4283-9ff3-ae80f71d9c36\") " pod="kube-system/kube-proxy-4qjrn" Jan 17 12:22:06.895011 kubelet[1783]: I0117 12:22:06.894985 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-etc-cni-netd\") pod \"cilium-5h79v\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") " pod="kube-system/cilium-5h79v" Jan 17 12:22:06.895034 kubelet[1783]: I0117 12:22:06.895015 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-lib-modules\") pod \"cilium-5h79v\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") " pod="kube-system/cilium-5h79v" Jan 17 12:22:06.895061 kubelet[1783]: I0117 12:22:06.895037 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-xtables-lock\") pod \"cilium-5h79v\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") " pod="kube-system/cilium-5h79v" Jan 17 12:22:06.895091 kubelet[1783]: I0117 12:22:06.895076 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03f21fda-11b1-4a93-8ca1-41502cc623c7-clustermesh-secrets\") pod \"cilium-5h79v\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") " pod="kube-system/cilium-5h79v" Jan 17 12:22:06.895146 kubelet[1783]: I0117 12:22:06.895115 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-cilium-run\") pod \"cilium-5h79v\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") " pod="kube-system/cilium-5h79v" Jan 17 12:22:06.895203 kubelet[1783]: I0117 12:22:06.895186 
1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-bpf-maps\") pod \"cilium-5h79v\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") " pod="kube-system/cilium-5h79v" Jan 17 12:22:06.895241 kubelet[1783]: I0117 12:22:06.895217 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-host-proc-sys-kernel\") pod \"cilium-5h79v\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") " pod="kube-system/cilium-5h79v" Jan 17 12:22:06.895277 kubelet[1783]: I0117 12:22:06.895262 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffb426e1-d226-4283-9ff3-ae80f71d9c36-xtables-lock\") pod \"kube-proxy-4qjrn\" (UID: \"ffb426e1-d226-4283-9ff3-ae80f71d9c36\") " pod="kube-system/kube-proxy-4qjrn" Jan 17 12:22:07.104992 kubelet[1783]: E0117 12:22:07.104941 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:07.105691 containerd[1459]: time="2025-01-17T12:22:07.105633364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4qjrn,Uid:ffb426e1-d226-4283-9ff3-ae80f71d9c36,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:07.121864 kubelet[1783]: E0117 12:22:07.121775 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:07.122246 containerd[1459]: time="2025-01-17T12:22:07.122183202Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-5h79v,Uid:03f21fda-11b1-4a93-8ca1-41502cc623c7,Namespace:kube-system,Attempt:0,}" Jan 17 12:22:07.781064 kubelet[1783]: E0117 12:22:07.781006 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:08.000622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3908047120.mount: Deactivated successfully. Jan 17 12:22:08.148848 containerd[1459]: time="2025-01-17T12:22:08.148690584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:08.169679 containerd[1459]: time="2025-01-17T12:22:08.169614900Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:08.193129 containerd[1459]: time="2025-01-17T12:22:08.193013554Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:22:08.194692 containerd[1459]: time="2025-01-17T12:22:08.194639087Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:22:08.196875 containerd[1459]: time="2025-01-17T12:22:08.196827600Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:08.200481 containerd[1459]: time="2025-01-17T12:22:08.200432066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:22:08.201442 containerd[1459]: 
time="2025-01-17T12:22:08.201327168Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.079073884s" Jan 17 12:22:08.204215 containerd[1459]: time="2025-01-17T12:22:08.204170076Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.098436297s" Jan 17 12:22:08.387605 containerd[1459]: time="2025-01-17T12:22:08.387476104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:08.387605 containerd[1459]: time="2025-01-17T12:22:08.387549313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:08.387605 containerd[1459]: time="2025-01-17T12:22:08.387565084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:08.387853 containerd[1459]: time="2025-01-17T12:22:08.387674329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:08.394012 containerd[1459]: time="2025-01-17T12:22:08.393782542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:08.394012 containerd[1459]: time="2025-01-17T12:22:08.393850632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:08.394012 containerd[1459]: time="2025-01-17T12:22:08.393865577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:08.394012 containerd[1459]: time="2025-01-17T12:22:08.393955813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:08.564631 systemd[1]: Started cri-containerd-652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f.scope - libcontainer container 652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f. Jan 17 12:22:08.570146 systemd[1]: Started cri-containerd-938780185718b5e0e5621d0116683028329d7ef3cc985c118f1dc2e94b66d4fe.scope - libcontainer container 938780185718b5e0e5621d0116683028329d7ef3cc985c118f1dc2e94b66d4fe. Jan 17 12:22:08.609302 containerd[1459]: time="2025-01-17T12:22:08.605431337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4qjrn,Uid:ffb426e1-d226-4283-9ff3-ae80f71d9c36,Namespace:kube-system,Attempt:0,} returns sandbox id \"938780185718b5e0e5621d0116683028329d7ef3cc985c118f1dc2e94b66d4fe\"" Jan 17 12:22:08.609302 containerd[1459]: time="2025-01-17T12:22:08.608948161Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:22:08.609580 kubelet[1783]: E0117 12:22:08.607779 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:08.630684 containerd[1459]: time="2025-01-17T12:22:08.630640129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5h79v,Uid:03f21fda-11b1-4a93-8ca1-41502cc623c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f\"" Jan 17 12:22:08.631703 kubelet[1783]: E0117 12:22:08.631653 1783 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:08.781848 kubelet[1783]: E0117 12:22:08.781771 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:09.770355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3885137888.mount: Deactivated successfully. Jan 17 12:22:09.783007 kubelet[1783]: E0117 12:22:09.782945 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:10.203157 containerd[1459]: time="2025-01-17T12:22:10.203104871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:10.204027 containerd[1459]: time="2025-01-17T12:22:10.203976153Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620941" Jan 17 12:22:10.205354 containerd[1459]: time="2025-01-17T12:22:10.205322788Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:10.207620 containerd[1459]: time="2025-01-17T12:22:10.207581503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:10.208223 containerd[1459]: time="2025-01-17T12:22:10.208180723Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 1.599201736s" Jan 17 12:22:10.208223 containerd[1459]: time="2025-01-17T12:22:10.208215289Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 17 12:22:10.208940 containerd[1459]: time="2025-01-17T12:22:10.208897771Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 12:22:10.210239 containerd[1459]: time="2025-01-17T12:22:10.210210956Z" level=info msg="CreateContainer within sandbox \"938780185718b5e0e5621d0116683028329d7ef3cc985c118f1dc2e94b66d4fe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:22:10.225777 containerd[1459]: time="2025-01-17T12:22:10.225731838Z" level=info msg="CreateContainer within sandbox \"938780185718b5e0e5621d0116683028329d7ef3cc985c118f1dc2e94b66d4fe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"087b0bec35c2a64d3c0fb35b3e8a55a7f9772a3ab082f6737eb73322bbd71e1c\"" Jan 17 12:22:10.226345 containerd[1459]: time="2025-01-17T12:22:10.226319935Z" level=info msg="StartContainer for \"087b0bec35c2a64d3c0fb35b3e8a55a7f9772a3ab082f6737eb73322bbd71e1c\"" Jan 17 12:22:10.284706 systemd[1]: run-containerd-runc-k8s.io-087b0bec35c2a64d3c0fb35b3e8a55a7f9772a3ab082f6737eb73322bbd71e1c-runc.17c8RT.mount: Deactivated successfully. Jan 17 12:22:10.295580 systemd[1]: Started cri-containerd-087b0bec35c2a64d3c0fb35b3e8a55a7f9772a3ab082f6737eb73322bbd71e1c.scope - libcontainer container 087b0bec35c2a64d3c0fb35b3e8a55a7f9772a3ab082f6737eb73322bbd71e1c. 
Jan 17 12:22:10.441899 containerd[1459]: time="2025-01-17T12:22:10.441842719Z" level=info msg="StartContainer for \"087b0bec35c2a64d3c0fb35b3e8a55a7f9772a3ab082f6737eb73322bbd71e1c\" returns successfully" Jan 17 12:22:10.662958 kubelet[1783]: E0117 12:22:10.662588 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:10.771728 kubelet[1783]: I0117 12:22:10.771670 1783 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4qjrn" podStartSLOduration=3.171472704 podStartE2EDuration="4.771618678s" podCreationTimestamp="2025-01-17 12:22:06 +0000 UTC" firstStartedPulling="2025-01-17 12:22:08.608540558 +0000 UTC m=+3.262900814" lastFinishedPulling="2025-01-17 12:22:10.208686532 +0000 UTC m=+4.863046788" observedRunningTime="2025-01-17 12:22:10.771530291 +0000 UTC m=+5.425890557" watchObservedRunningTime="2025-01-17 12:22:10.771618678 +0000 UTC m=+5.425978953" Jan 17 12:22:10.783931 kubelet[1783]: E0117 12:22:10.783903 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:11.665157 kubelet[1783]: E0117 12:22:11.665115 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:11.784491 kubelet[1783]: E0117 12:22:11.784444 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:12.785277 kubelet[1783]: E0117 12:22:12.785206 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:13.785909 kubelet[1783]: E0117 12:22:13.785826 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 17 12:22:14.786799 kubelet[1783]: E0117 12:22:14.786741 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:15.787363 kubelet[1783]: E0117 12:22:15.787273 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:16.788181 kubelet[1783]: E0117 12:22:16.788117 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:17.789102 kubelet[1783]: E0117 12:22:17.788999 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:18.789406 kubelet[1783]: E0117 12:22:18.789316 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:19.790557 kubelet[1783]: E0117 12:22:19.790494 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:20.791429 kubelet[1783]: E0117 12:22:20.791343 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:21.792403 kubelet[1783]: E0117 12:22:21.792347 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:22.100507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4161309171.mount: Deactivated successfully. 
Jan 17 12:22:22.793248 kubelet[1783]: E0117 12:22:22.793184 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:23.794045 kubelet[1783]: E0117 12:22:23.793973 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:24.794131 kubelet[1783]: E0117 12:22:24.794090 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:25.673595 containerd[1459]: time="2025-01-17T12:22:25.673513437Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:25.735404 containerd[1459]: time="2025-01-17T12:22:25.735279074Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735375" Jan 17 12:22:25.764969 containerd[1459]: time="2025-01-17T12:22:25.764899355Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:25.766688 containerd[1459]: time="2025-01-17T12:22:25.766649106Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.557722144s" Jan 17 12:22:25.766688 containerd[1459]: time="2025-01-17T12:22:25.766685485Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 12:22:25.768695 containerd[1459]: time="2025-01-17T12:22:25.768662426Z" level=info msg="CreateContainer within sandbox \"652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:22:25.779860 kubelet[1783]: E0117 12:22:25.779810 1783 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:25.794469 kubelet[1783]: E0117 12:22:25.794403 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:26.233620 containerd[1459]: time="2025-01-17T12:22:26.233569907Z" level=info msg="CreateContainer within sandbox \"652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3\"" Jan 17 12:22:26.234314 containerd[1459]: time="2025-01-17T12:22:26.234061653Z" level=info msg="StartContainer for \"7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3\"" Jan 17 12:22:26.266498 systemd[1]: Started cri-containerd-7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3.scope - libcontainer container 7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3. Jan 17 12:22:26.302222 systemd[1]: cri-containerd-7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3.scope: Deactivated successfully. 
Jan 17 12:22:26.384089 containerd[1459]: time="2025-01-17T12:22:26.384011523Z" level=info msg="StartContainer for \"7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3\" returns successfully" Jan 17 12:22:26.687841 kubelet[1783]: E0117 12:22:26.687697 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:26.794800 kubelet[1783]: E0117 12:22:26.794751 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:26.986674 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3-rootfs.mount: Deactivated successfully. Jan 17 12:22:27.680351 containerd[1459]: time="2025-01-17T12:22:27.680270226Z" level=info msg="shim disconnected" id=7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3 namespace=k8s.io Jan 17 12:22:27.680351 containerd[1459]: time="2025-01-17T12:22:27.680347111Z" level=warning msg="cleaning up after shim disconnected" id=7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3 namespace=k8s.io Jan 17 12:22:27.680351 containerd[1459]: time="2025-01-17T12:22:27.680359841Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:22:27.689255 kubelet[1783]: E0117 12:22:27.689178 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:27.795059 kubelet[1783]: E0117 12:22:27.794977 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:28.691776 kubelet[1783]: E0117 12:22:28.691745 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:28.693638 containerd[1459]: time="2025-01-17T12:22:28.693600329Z" level=info msg="CreateContainer within sandbox \"652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:22:28.795866 kubelet[1783]: E0117 12:22:28.795821 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:29.005946 containerd[1459]: time="2025-01-17T12:22:29.005777720Z" level=info msg="CreateContainer within sandbox \"652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087\"" Jan 17 12:22:29.006863 containerd[1459]: time="2025-01-17T12:22:29.006763142Z" level=info msg="StartContainer for \"48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087\"" Jan 17 12:22:29.036646 systemd[1]: Started cri-containerd-48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087.scope - libcontainer container 48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087. Jan 17 12:22:29.064488 containerd[1459]: time="2025-01-17T12:22:29.064413970Z" level=info msg="StartContainer for \"48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087\" returns successfully" Jan 17 12:22:29.075178 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:22:29.075429 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:22:29.075498 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:22:29.081001 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:22:29.081671 systemd[1]: cri-containerd-48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087.scope: Deactivated successfully. 
Jan 17 12:22:29.099937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087-rootfs.mount: Deactivated successfully. Jan 17 12:22:29.101159 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:22:29.333199 containerd[1459]: time="2025-01-17T12:22:29.333034012Z" level=info msg="shim disconnected" id=48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087 namespace=k8s.io Jan 17 12:22:29.333199 containerd[1459]: time="2025-01-17T12:22:29.333105721Z" level=warning msg="cleaning up after shim disconnected" id=48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087 namespace=k8s.io Jan 17 12:22:29.333199 containerd[1459]: time="2025-01-17T12:22:29.333117979Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:22:29.346600 containerd[1459]: time="2025-01-17T12:22:29.346545278Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:22:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:22:29.694951 kubelet[1783]: E0117 12:22:29.694911 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:29.696498 containerd[1459]: time="2025-01-17T12:22:29.696459892Z" level=info msg="CreateContainer within sandbox \"652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 12:22:29.796036 kubelet[1783]: E0117 12:22:29.796007 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:29.810694 containerd[1459]: time="2025-01-17T12:22:29.810607742Z" level=info msg="CreateContainer within sandbox 
\"652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f\"" Jan 17 12:22:29.811301 containerd[1459]: time="2025-01-17T12:22:29.811230473Z" level=info msg="StartContainer for \"c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f\"" Jan 17 12:22:29.844559 systemd[1]: Started cri-containerd-c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f.scope - libcontainer container c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f. Jan 17 12:22:29.875967 systemd[1]: cri-containerd-c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f.scope: Deactivated successfully. Jan 17 12:22:29.883328 containerd[1459]: time="2025-01-17T12:22:29.883276350Z" level=info msg="StartContainer for \"c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f\" returns successfully" Jan 17 12:22:29.915112 containerd[1459]: time="2025-01-17T12:22:29.915035591Z" level=info msg="shim disconnected" id=c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f namespace=k8s.io Jan 17 12:22:29.915112 containerd[1459]: time="2025-01-17T12:22:29.915105124Z" level=warning msg="cleaning up after shim disconnected" id=c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f namespace=k8s.io Jan 17 12:22:29.915112 containerd[1459]: time="2025-01-17T12:22:29.915122014Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:22:30.699142 kubelet[1783]: E0117 12:22:30.699095 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:30.701333 containerd[1459]: time="2025-01-17T12:22:30.701289052Z" level=info msg="CreateContainer within sandbox \"652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f\" for container 
&ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 12:22:30.796335 kubelet[1783]: E0117 12:22:30.796267 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:30.849854 containerd[1459]: time="2025-01-17T12:22:30.849779883Z" level=info msg="CreateContainer within sandbox \"652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02\"" Jan 17 12:22:30.850643 containerd[1459]: time="2025-01-17T12:22:30.850549901Z" level=info msg="StartContainer for \"b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02\"" Jan 17 12:22:30.886616 systemd[1]: Started cri-containerd-b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02.scope - libcontainer container b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02. Jan 17 12:22:30.917248 systemd[1]: cri-containerd-b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02.scope: Deactivated successfully. 
Jan 17 12:22:30.920878 containerd[1459]: time="2025-01-17T12:22:30.920835677Z" level=info msg="StartContainer for \"b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02\" returns successfully" Jan 17 12:22:30.947333 containerd[1459]: time="2025-01-17T12:22:30.947245898Z" level=info msg="shim disconnected" id=b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02 namespace=k8s.io Jan 17 12:22:30.947333 containerd[1459]: time="2025-01-17T12:22:30.947332389Z" level=warning msg="cleaning up after shim disconnected" id=b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02 namespace=k8s.io Jan 17 12:22:30.947333 containerd[1459]: time="2025-01-17T12:22:30.947343474Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:22:30.984190 systemd[1]: run-containerd-runc-k8s.io-b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02-runc.MIUgd4.mount: Deactivated successfully. Jan 17 12:22:30.984311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02-rootfs.mount: Deactivated successfully. 
Jan 17 12:22:31.703728 kubelet[1783]: E0117 12:22:31.703699 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:31.705896 containerd[1459]: time="2025-01-17T12:22:31.705847643Z" level=info msg="CreateContainer within sandbox \"652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 12:22:31.722526 containerd[1459]: time="2025-01-17T12:22:31.722488992Z" level=info msg="CreateContainer within sandbox \"652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132\"" Jan 17 12:22:31.723005 containerd[1459]: time="2025-01-17T12:22:31.722966519Z" level=info msg="StartContainer for \"dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132\"" Jan 17 12:22:31.753521 systemd[1]: Started cri-containerd-dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132.scope - libcontainer container dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132. 
Jan 17 12:22:31.782231 containerd[1459]: time="2025-01-17T12:22:31.782164562Z" level=info msg="StartContainer for \"dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132\" returns successfully" Jan 17 12:22:31.797249 kubelet[1783]: E0117 12:22:31.797197 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:31.937314 kubelet[1783]: I0117 12:22:31.937282 1783 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:22:32.289406 kernel: Initializing XFRM netlink socket Jan 17 12:22:32.708501 kubelet[1783]: E0117 12:22:32.708466 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:32.798055 kubelet[1783]: E0117 12:22:32.797994 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:33.556016 kubelet[1783]: I0117 12:22:33.555943 1783 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5h79v" podStartSLOduration=10.421302853 podStartE2EDuration="27.555875389s" podCreationTimestamp="2025-01-17 12:22:06 +0000 UTC" firstStartedPulling="2025-01-17 12:22:08.632398934 +0000 UTC m=+3.286759189" lastFinishedPulling="2025-01-17 12:22:25.76697147 +0000 UTC m=+20.421331725" observedRunningTime="2025-01-17 12:22:32.723308832 +0000 UTC m=+27.377669107" watchObservedRunningTime="2025-01-17 12:22:33.555875389 +0000 UTC m=+28.210235645" Jan 17 12:22:33.556287 kubelet[1783]: I0117 12:22:33.556263 1783 topology_manager.go:215] "Topology Admit Handler" podUID="8dee2fc2-5ba4-4e1b-a6e9-b4d1720465f5" podNamespace="default" podName="nginx-deployment-6d5f899847-kdw9w" Jan 17 12:22:33.562995 systemd[1]: Created slice kubepods-besteffort-pod8dee2fc2_5ba4_4e1b_a6e9_b4d1720465f5.slice - libcontainer container 
kubepods-besteffort-pod8dee2fc2_5ba4_4e1b_a6e9_b4d1720465f5.slice. Jan 17 12:22:33.595249 systemd-networkd[1391]: cilium_host: Link UP Jan 17 12:22:33.595476 systemd-networkd[1391]: cilium_net: Link UP Jan 17 12:22:33.595679 systemd-networkd[1391]: cilium_net: Gained carrier Jan 17 12:22:33.595913 systemd-networkd[1391]: cilium_host: Gained carrier Jan 17 12:22:33.702426 systemd-networkd[1391]: cilium_vxlan: Link UP Jan 17 12:22:33.702436 systemd-networkd[1391]: cilium_vxlan: Gained carrier Jan 17 12:22:33.711354 kubelet[1783]: E0117 12:22:33.711321 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:33.735928 kubelet[1783]: I0117 12:22:33.735857 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqvds\" (UniqueName: \"kubernetes.io/projected/8dee2fc2-5ba4-4e1b-a6e9-b4d1720465f5-kube-api-access-dqvds\") pod \"nginx-deployment-6d5f899847-kdw9w\" (UID: \"8dee2fc2-5ba4-4e1b-a6e9-b4d1720465f5\") " pod="default/nginx-deployment-6d5f899847-kdw9w" Jan 17 12:22:33.798273 kubelet[1783]: E0117 12:22:33.798217 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:33.867169 containerd[1459]: time="2025-01-17T12:22:33.866954116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-kdw9w,Uid:8dee2fc2-5ba4-4e1b-a6e9-b4d1720465f5,Namespace:default,Attempt:0,}" Jan 17 12:22:33.923439 kernel: NET: Registered PF_ALG protocol family Jan 17 12:22:34.415640 systemd-networkd[1391]: cilium_net: Gained IPv6LL Jan 17 12:22:34.587267 systemd-networkd[1391]: lxc_health: Link UP Jan 17 12:22:34.599187 systemd-networkd[1391]: lxc_health: Gained carrier Jan 17 12:22:34.609191 systemd-networkd[1391]: cilium_host: Gained IPv6LL Jan 17 12:22:34.799359 kubelet[1783]: E0117 
12:22:34.799308 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:34.902794 systemd-networkd[1391]: lxc54e383c50b8d: Link UP Jan 17 12:22:34.910505 kernel: eth0: renamed from tmp65818 Jan 17 12:22:34.920685 systemd-networkd[1391]: lxc54e383c50b8d: Gained carrier Jan 17 12:22:35.176181 kubelet[1783]: E0117 12:22:35.175897 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:35.439618 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL Jan 17 12:22:35.715026 kubelet[1783]: E0117 12:22:35.714900 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:35.800099 kubelet[1783]: E0117 12:22:35.800049 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:35.823529 systemd-networkd[1391]: lxc_health: Gained IPv6LL Jan 17 12:22:36.591751 systemd-networkd[1391]: lxc54e383c50b8d: Gained IPv6LL Jan 17 12:22:36.716357 kubelet[1783]: E0117 12:22:36.716320 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:22:36.801197 kubelet[1783]: E0117 12:22:36.801117 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:36.906609 update_engine[1453]: I20250117 12:22:36.906440 1453 update_attempter.cc:509] Updating boot flags... 
Jan 17 12:22:36.938372 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2486) Jan 17 12:22:36.981460 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2486) Jan 17 12:22:37.801487 kubelet[1783]: E0117 12:22:37.801435 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:38.479524 containerd[1459]: time="2025-01-17T12:22:38.479418036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:38.479524 containerd[1459]: time="2025-01-17T12:22:38.479480472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:38.479524 containerd[1459]: time="2025-01-17T12:22:38.479504454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:38.480023 containerd[1459]: time="2025-01-17T12:22:38.479582063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:38.501524 systemd[1]: Started cri-containerd-65818f39c4a57f2590925965a8be403bb50974decff76cf6ba99c70e134b33b0.scope - libcontainer container 65818f39c4a57f2590925965a8be403bb50974decff76cf6ba99c70e134b33b0. 
Jan 17 12:22:38.513343 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:22:38.538085 containerd[1459]: time="2025-01-17T12:22:38.538042188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-kdw9w,Uid:8dee2fc2-5ba4-4e1b-a6e9-b4d1720465f5,Namespace:default,Attempt:0,} returns sandbox id \"65818f39c4a57f2590925965a8be403bb50974decff76cf6ba99c70e134b33b0\"" Jan 17 12:22:38.540044 containerd[1459]: time="2025-01-17T12:22:38.540009946Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 12:22:38.802729 kubelet[1783]: E0117 12:22:38.802590 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:39.803292 kubelet[1783]: E0117 12:22:39.803232 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:40.803674 kubelet[1783]: E0117 12:22:40.803617 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:41.805021 kubelet[1783]: E0117 12:22:41.803831 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:41.812898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2031225523.mount: Deactivated successfully. 
Jan 17 12:22:42.804566 kubelet[1783]: E0117 12:22:42.804394 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:43.216230 containerd[1459]: time="2025-01-17T12:22:43.216174463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:43.217432 containerd[1459]: time="2025-01-17T12:22:43.217352435Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 17 12:22:43.218648 containerd[1459]: time="2025-01-17T12:22:43.218596135Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:43.221234 containerd[1459]: time="2025-01-17T12:22:43.221181353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:43.222433 containerd[1459]: time="2025-01-17T12:22:43.222394460Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 4.682319393s" Jan 17 12:22:43.222487 containerd[1459]: time="2025-01-17T12:22:43.222436668Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 17 12:22:43.224368 containerd[1459]: time="2025-01-17T12:22:43.224330890Z" level=info msg="CreateContainer within sandbox \"65818f39c4a57f2590925965a8be403bb50974decff76cf6ba99c70e134b33b0\" for container 
&ContainerMetadata{Name:nginx,Attempt:0,}" Jan 17 12:22:43.236313 containerd[1459]: time="2025-01-17T12:22:43.236261701Z" level=info msg="CreateContainer within sandbox \"65818f39c4a57f2590925965a8be403bb50974decff76cf6ba99c70e134b33b0\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"2f2c59afe7221860060b37405db50501d9c78fc997cead3cbbb5cf7f39529909\"" Jan 17 12:22:43.236801 containerd[1459]: time="2025-01-17T12:22:43.236688771Z" level=info msg="StartContainer for \"2f2c59afe7221860060b37405db50501d9c78fc997cead3cbbb5cf7f39529909\"" Jan 17 12:22:43.236840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2758687129.mount: Deactivated successfully. Jan 17 12:22:43.278635 systemd[1]: Started cri-containerd-2f2c59afe7221860060b37405db50501d9c78fc997cead3cbbb5cf7f39529909.scope - libcontainer container 2f2c59afe7221860060b37405db50501d9c78fc997cead3cbbb5cf7f39529909. Jan 17 12:22:43.308170 containerd[1459]: time="2025-01-17T12:22:43.308104278Z" level=info msg="StartContainer for \"2f2c59afe7221860060b37405db50501d9c78fc997cead3cbbb5cf7f39529909\" returns successfully" Jan 17 12:22:43.766566 kubelet[1783]: I0117 12:22:43.766532 1783 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-kdw9w" podStartSLOduration=6.083011548 podStartE2EDuration="10.766488908s" podCreationTimestamp="2025-01-17 12:22:33 +0000 UTC" firstStartedPulling="2025-01-17 12:22:38.539300358 +0000 UTC m=+33.193660613" lastFinishedPulling="2025-01-17 12:22:43.222777718 +0000 UTC m=+37.877137973" observedRunningTime="2025-01-17 12:22:43.766236297 +0000 UTC m=+38.420596542" watchObservedRunningTime="2025-01-17 12:22:43.766488908 +0000 UTC m=+38.420849163" Jan 17 12:22:43.804594 kubelet[1783]: E0117 12:22:43.804524 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:44.805183 kubelet[1783]: E0117 12:22:44.805116 1783 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:45.780334 kubelet[1783]: E0117 12:22:45.780253 1783 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:45.806161 kubelet[1783]: E0117 12:22:45.806003 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:46.163157 kubelet[1783]: I0117 12:22:46.163014 1783 topology_manager.go:215] "Topology Admit Handler" podUID="6b6336ae-1401-4896-bda5-91eeabbc1dca" podNamespace="default" podName="nfs-server-provisioner-0" Jan 17 12:22:46.169835 systemd[1]: Created slice kubepods-besteffort-pod6b6336ae_1401_4896_bda5_91eeabbc1dca.slice - libcontainer container kubepods-besteffort-pod6b6336ae_1401_4896_bda5_91eeabbc1dca.slice. Jan 17 12:22:46.305716 kubelet[1783]: I0117 12:22:46.305650 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6b6336ae-1401-4896-bda5-91eeabbc1dca-data\") pod \"nfs-server-provisioner-0\" (UID: \"6b6336ae-1401-4896-bda5-91eeabbc1dca\") " pod="default/nfs-server-provisioner-0" Jan 17 12:22:46.305716 kubelet[1783]: I0117 12:22:46.305722 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdpwm\" (UniqueName: \"kubernetes.io/projected/6b6336ae-1401-4896-bda5-91eeabbc1dca-kube-api-access-sdpwm\") pod \"nfs-server-provisioner-0\" (UID: \"6b6336ae-1401-4896-bda5-91eeabbc1dca\") " pod="default/nfs-server-provisioner-0" Jan 17 12:22:46.473710 containerd[1459]: time="2025-01-17T12:22:46.473654769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6b6336ae-1401-4896-bda5-91eeabbc1dca,Namespace:default,Attempt:0,}" Jan 17 12:22:46.527460 systemd-networkd[1391]: lxc8054bb2bc1e2: Link UP Jan 17 12:22:46.533403 
kernel: eth0: renamed from tmp45aea Jan 17 12:22:46.561460 systemd-networkd[1391]: lxc8054bb2bc1e2: Gained carrier Jan 17 12:22:46.806398 kubelet[1783]: E0117 12:22:46.806195 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:46.867367 containerd[1459]: time="2025-01-17T12:22:46.867228038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:22:46.867367 containerd[1459]: time="2025-01-17T12:22:46.867319207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:22:46.867367 containerd[1459]: time="2025-01-17T12:22:46.867338307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:46.867764 containerd[1459]: time="2025-01-17T12:22:46.867454729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:22:46.893535 systemd[1]: Started cri-containerd-45aea06bf7f661d90a6f9e5444f7b3f7d11018940903eaa4b0bf02b3c638f4ad.scope - libcontainer container 45aea06bf7f661d90a6f9e5444f7b3f7d11018940903eaa4b0bf02b3c638f4ad. 
Jan 17 12:22:46.916499 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:22:46.997568 containerd[1459]: time="2025-01-17T12:22:46.997517667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6b6336ae-1401-4896-bda5-91eeabbc1dca,Namespace:default,Attempt:0,} returns sandbox id \"45aea06bf7f661d90a6f9e5444f7b3f7d11018940903eaa4b0bf02b3c638f4ad\"" Jan 17 12:22:46.999708 containerd[1459]: time="2025-01-17T12:22:46.999649308Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 17 12:22:47.791595 systemd-networkd[1391]: lxc8054bb2bc1e2: Gained IPv6LL Jan 17 12:22:47.807307 kubelet[1783]: E0117 12:22:47.807264 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:48.807503 kubelet[1783]: E0117 12:22:48.807432 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:49.808451 kubelet[1783]: E0117 12:22:49.808392 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:50.054941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4051109522.mount: Deactivated successfully. 
Jan 17 12:22:50.808720 kubelet[1783]: E0117 12:22:50.808669 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:51.809126 kubelet[1783]: E0117 12:22:51.809062 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:52.809817 kubelet[1783]: E0117 12:22:52.809757 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:22:53.275526 containerd[1459]: time="2025-01-17T12:22:53.275452889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:53.277136 containerd[1459]: time="2025-01-17T12:22:53.277085634Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 17 12:22:53.278627 containerd[1459]: time="2025-01-17T12:22:53.278579045Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:53.284020 containerd[1459]: time="2025-01-17T12:22:53.283975889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:22:53.284972 containerd[1459]: time="2025-01-17T12:22:53.284933975Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size 
\"91036984\" in 6.285220313s" Jan 17 12:22:53.285032 containerd[1459]: time="2025-01-17T12:22:53.284973615Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 17 12:22:53.286844 containerd[1459]: time="2025-01-17T12:22:53.286816987Z" level=info msg="CreateContainer within sandbox \"45aea06bf7f661d90a6f9e5444f7b3f7d11018940903eaa4b0bf02b3c638f4ad\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 17 12:22:53.301058 containerd[1459]: time="2025-01-17T12:22:53.301026831Z" level=info msg="CreateContainer within sandbox \"45aea06bf7f661d90a6f9e5444f7b3f7d11018940903eaa4b0bf02b3c638f4ad\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"5deb8961d3641690d3824509652cbfa4306fb4d6ecfa58f7698ed27e938be5af\"" Jan 17 12:22:53.301491 containerd[1459]: time="2025-01-17T12:22:53.301466904Z" level=info msg="StartContainer for \"5deb8961d3641690d3824509652cbfa4306fb4d6ecfa58f7698ed27e938be5af\"" Jan 17 12:22:53.368503 systemd[1]: Started cri-containerd-5deb8961d3641690d3824509652cbfa4306fb4d6ecfa58f7698ed27e938be5af.scope - libcontainer container 5deb8961d3641690d3824509652cbfa4306fb4d6ecfa58f7698ed27e938be5af. 
Jan 17 12:22:53.402805 containerd[1459]: time="2025-01-17T12:22:53.402741712Z" level=info msg="StartContainer for \"5deb8961d3641690d3824509652cbfa4306fb4d6ecfa58f7698ed27e938be5af\" returns successfully"
Jan 17 12:22:53.760914 kubelet[1783]: I0117 12:22:53.760860 1783 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.474696474 podStartE2EDuration="7.760807321s" podCreationTimestamp="2025-01-17 12:22:46 +0000 UTC" firstStartedPulling="2025-01-17 12:22:46.999141343 +0000 UTC m=+41.653501598" lastFinishedPulling="2025-01-17 12:22:53.28525219 +0000 UTC m=+47.939612445" observedRunningTime="2025-01-17 12:22:53.760677828 +0000 UTC m=+48.415038083" watchObservedRunningTime="2025-01-17 12:22:53.760807321 +0000 UTC m=+48.415167587"
Jan 17 12:22:53.810088 kubelet[1783]: E0117 12:22:53.810032 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:22:54.811195 kubelet[1783]: E0117 12:22:54.811135 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:22:55.812071 kubelet[1783]: E0117 12:22:55.812029 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:22:56.812932 kubelet[1783]: E0117 12:22:56.812876 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:22:57.813549 kubelet[1783]: E0117 12:22:57.813475 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:22:58.814627 kubelet[1783]: E0117 12:22:58.814581 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:22:59.815533 kubelet[1783]: E0117 12:22:59.815455 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:00.816466 kubelet[1783]: E0117 12:23:00.816409 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:01.817563 kubelet[1783]: E0117 12:23:01.817513 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:02.818512 kubelet[1783]: E0117 12:23:02.818453 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:03.018464 kubelet[1783]: I0117 12:23:03.018412 1783 topology_manager.go:215] "Topology Admit Handler" podUID="77c12986-6116-4cba-864e-3b4c4e74515a" podNamespace="default" podName="test-pod-1"
Jan 17 12:23:03.024018 systemd[1]: Created slice kubepods-besteffort-pod77c12986_6116_4cba_864e_3b4c4e74515a.slice - libcontainer container kubepods-besteffort-pod77c12986_6116_4cba_864e_3b4c4e74515a.slice.
Jan 17 12:23:03.218571 kubelet[1783]: I0117 12:23:03.218512 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ac433274-05d1-457f-8080-4da7827e2075\" (UniqueName: \"kubernetes.io/nfs/77c12986-6116-4cba-864e-3b4c4e74515a-pvc-ac433274-05d1-457f-8080-4da7827e2075\") pod \"test-pod-1\" (UID: \"77c12986-6116-4cba-864e-3b4c4e74515a\") " pod="default/test-pod-1"
Jan 17 12:23:03.218571 kubelet[1783]: I0117 12:23:03.218574 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzswx\" (UniqueName: \"kubernetes.io/projected/77c12986-6116-4cba-864e-3b4c4e74515a-kube-api-access-bzswx\") pod \"test-pod-1\" (UID: \"77c12986-6116-4cba-864e-3b4c4e74515a\") " pod="default/test-pod-1"
Jan 17 12:23:03.351401 kernel: FS-Cache: Loaded
Jan 17 12:23:03.423561 kernel: RPC: Registered named UNIX socket transport module.
Jan 17 12:23:03.423672 kernel: RPC: Registered udp transport module.
Jan 17 12:23:03.423692 kernel: RPC: Registered tcp transport module.
Jan 17 12:23:03.423707 kernel: RPC: Registered tcp-with-tls transport module.
Jan 17 12:23:03.424920 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 17 12:23:03.733819 kernel: NFS: Registering the id_resolver key type
Jan 17 12:23:03.733948 kernel: Key type id_resolver registered
Jan 17 12:23:03.733986 kernel: Key type id_legacy registered
Jan 17 12:23:03.765742 nfsidmap[3189]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 17 12:23:03.786924 nfsidmap[3192]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 17 12:23:03.819439 kubelet[1783]: E0117 12:23:03.819355 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:03.927003 containerd[1459]: time="2025-01-17T12:23:03.926912481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:77c12986-6116-4cba-864e-3b4c4e74515a,Namespace:default,Attempt:0,}"
Jan 17 12:23:03.993673 systemd-networkd[1391]: lxc54da6bd0b244: Link UP
Jan 17 12:23:04.005434 kernel: eth0: renamed from tmp1361a
Jan 17 12:23:04.010176 systemd-networkd[1391]: lxc54da6bd0b244: Gained carrier
Jan 17 12:23:04.206823 containerd[1459]: time="2025-01-17T12:23:04.203616228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:23:04.206823 containerd[1459]: time="2025-01-17T12:23:04.206788900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:23:04.206823 containerd[1459]: time="2025-01-17T12:23:04.206802157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:23:04.206983 containerd[1459]: time="2025-01-17T12:23:04.206882827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:23:04.226528 systemd[1]: Started cri-containerd-1361ad34dcdc2e63f2fe65937addf4bffaf6f2fead1ce3841732902c6c5b97f6.scope - libcontainer container 1361ad34dcdc2e63f2fe65937addf4bffaf6f2fead1ce3841732902c6c5b97f6.
Jan 17 12:23:04.237971 systemd-resolved[1329]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 17 12:23:04.262864 containerd[1459]: time="2025-01-17T12:23:04.262754510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:77c12986-6116-4cba-864e-3b4c4e74515a,Namespace:default,Attempt:0,} returns sandbox id \"1361ad34dcdc2e63f2fe65937addf4bffaf6f2fead1ce3841732902c6c5b97f6\""
Jan 17 12:23:04.264235 containerd[1459]: time="2025-01-17T12:23:04.264216805Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 17 12:23:04.631711 containerd[1459]: time="2025-01-17T12:23:04.631564453Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:23:04.632527 containerd[1459]: time="2025-01-17T12:23:04.632453439Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 17 12:23:04.635055 containerd[1459]: time="2025-01-17T12:23:04.635002583Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 370.689348ms"
Jan 17 12:23:04.635055 containerd[1459]: time="2025-01-17T12:23:04.635042242Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 17 12:23:04.637004 containerd[1459]: time="2025-01-17T12:23:04.636959390Z" level=info msg="CreateContainer within sandbox \"1361ad34dcdc2e63f2fe65937addf4bffaf6f2fead1ce3841732902c6c5b97f6\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 17 12:23:04.652336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1811482457.mount: Deactivated successfully.
Jan 17 12:23:04.654929 containerd[1459]: time="2025-01-17T12:23:04.654883618Z" level=info msg="CreateContainer within sandbox \"1361ad34dcdc2e63f2fe65937addf4bffaf6f2fead1ce3841732902c6c5b97f6\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"c9403143a59f14946947e5ffef8fe5a0230c10e1838971afb6ff1499639c93e0\""
Jan 17 12:23:04.655552 containerd[1459]: time="2025-01-17T12:23:04.655481927Z" level=info msg="StartContainer for \"c9403143a59f14946947e5ffef8fe5a0230c10e1838971afb6ff1499639c93e0\""
Jan 17 12:23:04.686512 systemd[1]: Started cri-containerd-c9403143a59f14946947e5ffef8fe5a0230c10e1838971afb6ff1499639c93e0.scope - libcontainer container c9403143a59f14946947e5ffef8fe5a0230c10e1838971afb6ff1499639c93e0.
Jan 17 12:23:04.716406 containerd[1459]: time="2025-01-17T12:23:04.713962382Z" level=info msg="StartContainer for \"c9403143a59f14946947e5ffef8fe5a0230c10e1838971afb6ff1499639c93e0\" returns successfully"
Jan 17 12:23:04.779115 kubelet[1783]: I0117 12:23:04.779068 1783 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.407706014 podStartE2EDuration="18.779017077s" podCreationTimestamp="2025-01-17 12:22:46 +0000 UTC" firstStartedPulling="2025-01-17 12:23:04.263987368 +0000 UTC m=+58.918347623" lastFinishedPulling="2025-01-17 12:23:04.635298431 +0000 UTC m=+59.289658686" observedRunningTime="2025-01-17 12:23:04.778720819 +0000 UTC m=+59.433081074" watchObservedRunningTime="2025-01-17 12:23:04.779017077 +0000 UTC m=+59.433377332"
Jan 17 12:23:04.820001 kubelet[1783]: E0117 12:23:04.819952 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:05.135591 systemd-networkd[1391]: lxc54da6bd0b244: Gained IPv6LL
Jan 17 12:23:05.780235 kubelet[1783]: E0117 12:23:05.780187 1783 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:05.820332 kubelet[1783]: E0117 12:23:05.820290 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:06.820587 kubelet[1783]: E0117 12:23:06.820525 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:07.821571 kubelet[1783]: E0117 12:23:07.821467 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:08.822138 kubelet[1783]: E0117 12:23:08.822102 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:08.910079 containerd[1459]: time="2025-01-17T12:23:08.910024970Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 12:23:08.918221 containerd[1459]: time="2025-01-17T12:23:08.918183971Z" level=info msg="StopContainer for \"dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132\" with timeout 2 (s)"
Jan 17 12:23:08.918428 containerd[1459]: time="2025-01-17T12:23:08.918408294Z" level=info msg="Stop container \"dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132\" with signal terminated"
Jan 17 12:23:08.925630 systemd-networkd[1391]: lxc_health: Link DOWN
Jan 17 12:23:08.925642 systemd-networkd[1391]: lxc_health: Lost carrier
Jan 17 12:23:08.957790 systemd[1]: cri-containerd-dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132.scope: Deactivated successfully.
Jan 17 12:23:08.958468 systemd[1]: cri-containerd-dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132.scope: Consumed 7.313s CPU time.
Jan 17 12:23:08.980474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132-rootfs.mount: Deactivated successfully.
Jan 17 12:23:09.188669 containerd[1459]: time="2025-01-17T12:23:09.188517487Z" level=info msg="shim disconnected" id=dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132 namespace=k8s.io
Jan 17 12:23:09.188669 containerd[1459]: time="2025-01-17T12:23:09.188575853Z" level=warning msg="cleaning up after shim disconnected" id=dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132 namespace=k8s.io
Jan 17 12:23:09.188669 containerd[1459]: time="2025-01-17T12:23:09.188588738Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:23:09.314635 containerd[1459]: time="2025-01-17T12:23:09.314567345Z" level=info msg="StopContainer for \"dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132\" returns successfully"
Jan 17 12:23:09.315396 containerd[1459]: time="2025-01-17T12:23:09.315340753Z" level=info msg="StopPodSandbox for \"652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f\""
Jan 17 12:23:09.315396 containerd[1459]: time="2025-01-17T12:23:09.315400531Z" level=info msg="Container to stop \"7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:23:09.315591 containerd[1459]: time="2025-01-17T12:23:09.315417024Z" level=info msg="Container to stop \"48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:23:09.315591 containerd[1459]: time="2025-01-17T12:23:09.315429177Z" level=info msg="Container to stop \"c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:23:09.315591 containerd[1459]: time="2025-01-17T12:23:09.315441331Z" level=info msg="Container to stop \"dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:23:09.315591 containerd[1459]: time="2025-01-17T12:23:09.315452553Z" level=info msg="Container to stop \"b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:23:09.317785 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f-shm.mount: Deactivated successfully.
Jan 17 12:23:09.322024 systemd[1]: cri-containerd-652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f.scope: Deactivated successfully.
Jan 17 12:23:09.341253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f-rootfs.mount: Deactivated successfully.
Jan 17 12:23:09.427985 containerd[1459]: time="2025-01-17T12:23:09.427936358Z" level=info msg="shim disconnected" id=652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f namespace=k8s.io
Jan 17 12:23:09.427985 containerd[1459]: time="2025-01-17T12:23:09.427975897Z" level=warning msg="cleaning up after shim disconnected" id=652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f namespace=k8s.io
Jan 17 12:23:09.427985 containerd[1459]: time="2025-01-17T12:23:09.427987038Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:23:09.443087 containerd[1459]: time="2025-01-17T12:23:09.443041803Z" level=info msg="TearDown network for sandbox \"652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f\" successfully"
Jan 17 12:23:09.443087 containerd[1459]: time="2025-01-17T12:23:09.443075539Z" level=info msg="StopPodSandbox for \"652dd845106c2ebbd299b91005314630d35a53719c73fb9b6e3ae5f0b130d94f\" returns successfully"
Jan 17 12:23:09.556075 kubelet[1783]: I0117 12:23:09.556020 1783 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-xtables-lock\") pod \"03f21fda-11b1-4a93-8ca1-41502cc623c7\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") "
Jan 17 12:23:09.556075 kubelet[1783]: I0117 12:23:09.556073 1783 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-etc-cni-netd\") pod \"03f21fda-11b1-4a93-8ca1-41502cc623c7\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") "
Jan 17 12:23:09.556299 kubelet[1783]: I0117 12:23:09.556094 1783 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-cni-path\") pod \"03f21fda-11b1-4a93-8ca1-41502cc623c7\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") "
Jan 17 12:23:09.556299 kubelet[1783]: I0117 12:23:09.556112 1783 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-host-proc-sys-net\") pod \"03f21fda-11b1-4a93-8ca1-41502cc623c7\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") "
Jan 17 12:23:09.556299 kubelet[1783]: I0117 12:23:09.556130 1783 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-bpf-maps\") pod \"03f21fda-11b1-4a93-8ca1-41502cc623c7\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") "
Jan 17 12:23:09.556299 kubelet[1783]: I0117 12:23:09.556147 1783 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-cilium-cgroup\") pod \"03f21fda-11b1-4a93-8ca1-41502cc623c7\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") "
Jan 17 12:23:09.556299 kubelet[1783]: I0117 12:23:09.556127 1783 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "03f21fda-11b1-4a93-8ca1-41502cc623c7" (UID: "03f21fda-11b1-4a93-8ca1-41502cc623c7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:23:09.556299 kubelet[1783]: I0117 12:23:09.556173 1783 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt9sf\" (UniqueName: \"kubernetes.io/projected/03f21fda-11b1-4a93-8ca1-41502cc623c7-kube-api-access-wt9sf\") pod \"03f21fda-11b1-4a93-8ca1-41502cc623c7\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") "
Jan 17 12:23:09.556491 kubelet[1783]: I0117 12:23:09.556199 1783 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03f21fda-11b1-4a93-8ca1-41502cc623c7-cilium-config-path\") pod \"03f21fda-11b1-4a93-8ca1-41502cc623c7\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") "
Jan 17 12:23:09.556491 kubelet[1783]: I0117 12:23:09.556179 1783 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "03f21fda-11b1-4a93-8ca1-41502cc623c7" (UID: "03f21fda-11b1-4a93-8ca1-41502cc623c7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:23:09.556491 kubelet[1783]: I0117 12:23:09.556219 1783 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "03f21fda-11b1-4a93-8ca1-41502cc623c7" (UID: "03f21fda-11b1-4a93-8ca1-41502cc623c7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:23:09.556491 kubelet[1783]: I0117 12:23:09.556216 1783 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-hostproc\") pod \"03f21fda-11b1-4a93-8ca1-41502cc623c7\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") "
Jan 17 12:23:09.556491 kubelet[1783]: I0117 12:23:09.556255 1783 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "03f21fda-11b1-4a93-8ca1-41502cc623c7" (UID: "03f21fda-11b1-4a93-8ca1-41502cc623c7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:23:09.556672 kubelet[1783]: I0117 12:23:09.556256 1783 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-hostproc" (OuterVolumeSpecName: "hostproc") pod "03f21fda-11b1-4a93-8ca1-41502cc623c7" (UID: "03f21fda-11b1-4a93-8ca1-41502cc623c7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:23:09.556672 kubelet[1783]: I0117 12:23:09.556269 1783 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "03f21fda-11b1-4a93-8ca1-41502cc623c7" (UID: "03f21fda-11b1-4a93-8ca1-41502cc623c7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:23:09.556672 kubelet[1783]: I0117 12:23:09.556290 1783 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03f21fda-11b1-4a93-8ca1-41502cc623c7-hubble-tls\") pod \"03f21fda-11b1-4a93-8ca1-41502cc623c7\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") "
Jan 17 12:23:09.556672 kubelet[1783]: I0117 12:23:09.556317 1783 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-lib-modules\") pod \"03f21fda-11b1-4a93-8ca1-41502cc623c7\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") "
Jan 17 12:23:09.556672 kubelet[1783]: I0117 12:23:09.556337 1783 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03f21fda-11b1-4a93-8ca1-41502cc623c7-clustermesh-secrets\") pod \"03f21fda-11b1-4a93-8ca1-41502cc623c7\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") "
Jan 17 12:23:09.556672 kubelet[1783]: I0117 12:23:09.556357 1783 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-cilium-run\") pod \"03f21fda-11b1-4a93-8ca1-41502cc623c7\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") "
Jan 17 12:23:09.556861 kubelet[1783]: I0117 12:23:09.556409 1783 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-host-proc-sys-kernel\") pod \"03f21fda-11b1-4a93-8ca1-41502cc623c7\" (UID: \"03f21fda-11b1-4a93-8ca1-41502cc623c7\") "
Jan 17 12:23:09.556861 kubelet[1783]: I0117 12:23:09.556455 1783 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-xtables-lock\") on node \"10.0.0.151\" DevicePath \"\""
Jan 17 12:23:09.556861 kubelet[1783]: I0117 12:23:09.556466 1783 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-etc-cni-netd\") on node \"10.0.0.151\" DevicePath \"\""
Jan 17 12:23:09.556861 kubelet[1783]: I0117 12:23:09.556475 1783 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-cilium-cgroup\") on node \"10.0.0.151\" DevicePath \"\""
Jan 17 12:23:09.556861 kubelet[1783]: I0117 12:23:09.556485 1783 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-host-proc-sys-net\") on node \"10.0.0.151\" DevicePath \"\""
Jan 17 12:23:09.556861 kubelet[1783]: I0117 12:23:09.556495 1783 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-bpf-maps\") on node \"10.0.0.151\" DevicePath \"\""
Jan 17 12:23:09.556861 kubelet[1783]: I0117 12:23:09.556506 1783 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-hostproc\") on node \"10.0.0.151\" DevicePath \"\""
Jan 17 12:23:09.557152 kubelet[1783]: I0117 12:23:09.556540 1783 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "03f21fda-11b1-4a93-8ca1-41502cc623c7" (UID: "03f21fda-11b1-4a93-8ca1-41502cc623c7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:23:09.557152 kubelet[1783]: I0117 12:23:09.556240 1783 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-cni-path" (OuterVolumeSpecName: "cni-path") pod "03f21fda-11b1-4a93-8ca1-41502cc623c7" (UID: "03f21fda-11b1-4a93-8ca1-41502cc623c7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:23:09.559425 kubelet[1783]: I0117 12:23:09.557724 1783 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "03f21fda-11b1-4a93-8ca1-41502cc623c7" (UID: "03f21fda-11b1-4a93-8ca1-41502cc623c7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:23:09.559425 kubelet[1783]: I0117 12:23:09.557760 1783 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "03f21fda-11b1-4a93-8ca1-41502cc623c7" (UID: "03f21fda-11b1-4a93-8ca1-41502cc623c7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:23:09.560054 kubelet[1783]: I0117 12:23:09.559755 1783 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f21fda-11b1-4a93-8ca1-41502cc623c7-kube-api-access-wt9sf" (OuterVolumeSpecName: "kube-api-access-wt9sf") pod "03f21fda-11b1-4a93-8ca1-41502cc623c7" (UID: "03f21fda-11b1-4a93-8ca1-41502cc623c7"). InnerVolumeSpecName "kube-api-access-wt9sf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 12:23:09.560861 systemd[1]: var-lib-kubelet-pods-03f21fda\x2d11b1\x2d4a93\x2d8ca1\x2d41502cc623c7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwt9sf.mount: Deactivated successfully.
Jan 17 12:23:09.560953 kubelet[1783]: I0117 12:23:09.560839 1783 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f21fda-11b1-4a93-8ca1-41502cc623c7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "03f21fda-11b1-4a93-8ca1-41502cc623c7" (UID: "03f21fda-11b1-4a93-8ca1-41502cc623c7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 17 12:23:09.561329 kubelet[1783]: I0117 12:23:09.561246 1783 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f21fda-11b1-4a93-8ca1-41502cc623c7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "03f21fda-11b1-4a93-8ca1-41502cc623c7" (UID: "03f21fda-11b1-4a93-8ca1-41502cc623c7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 12:23:09.561999 kubelet[1783]: I0117 12:23:09.561949 1783 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03f21fda-11b1-4a93-8ca1-41502cc623c7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "03f21fda-11b1-4a93-8ca1-41502cc623c7" (UID: "03f21fda-11b1-4a93-8ca1-41502cc623c7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 17 12:23:09.657057 kubelet[1783]: I0117 12:23:09.656975 1783 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/03f21fda-11b1-4a93-8ca1-41502cc623c7-cilium-config-path\") on node \"10.0.0.151\" DevicePath \"\""
Jan 17 12:23:09.657057 kubelet[1783]: I0117 12:23:09.657020 1783 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/03f21fda-11b1-4a93-8ca1-41502cc623c7-hubble-tls\") on node \"10.0.0.151\" DevicePath \"\""
Jan 17 12:23:09.657057 kubelet[1783]: I0117 12:23:09.657032 1783 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-lib-modules\") on node \"10.0.0.151\" DevicePath \"\""
Jan 17 12:23:09.657057 kubelet[1783]: I0117 12:23:09.657043 1783 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/03f21fda-11b1-4a93-8ca1-41502cc623c7-clustermesh-secrets\") on node \"10.0.0.151\" DevicePath \"\""
Jan 17 12:23:09.657057 kubelet[1783]: I0117 12:23:09.657055 1783 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-cilium-run\") on node \"10.0.0.151\" DevicePath \"\""
Jan 17 12:23:09.657057 kubelet[1783]: I0117 12:23:09.657067 1783 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-host-proc-sys-kernel\") on node \"10.0.0.151\" DevicePath \"\""
Jan 17 12:23:09.657057 kubelet[1783]: I0117 12:23:09.657077 1783 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/03f21fda-11b1-4a93-8ca1-41502cc623c7-cni-path\") on node \"10.0.0.151\" DevicePath \"\""
Jan 17 12:23:09.657057 kubelet[1783]: I0117 12:23:09.657087 1783 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wt9sf\" (UniqueName: \"kubernetes.io/projected/03f21fda-11b1-4a93-8ca1-41502cc623c7-kube-api-access-wt9sf\") on node \"10.0.0.151\" DevicePath \"\""
Jan 17 12:23:09.781692 kubelet[1783]: I0117 12:23:09.781583 1783 scope.go:117] "RemoveContainer" containerID="dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132"
Jan 17 12:23:09.782596 containerd[1459]: time="2025-01-17T12:23:09.782566283Z" level=info msg="RemoveContainer for \"dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132\""
Jan 17 12:23:09.787115 systemd[1]: Removed slice kubepods-burstable-pod03f21fda_11b1_4a93_8ca1_41502cc623c7.slice - libcontainer container kubepods-burstable-pod03f21fda_11b1_4a93_8ca1_41502cc623c7.slice.
Jan 17 12:23:09.787206 systemd[1]: kubepods-burstable-pod03f21fda_11b1_4a93_8ca1_41502cc623c7.slice: Consumed 7.416s CPU time.
Jan 17 12:23:09.822835 kubelet[1783]: E0117 12:23:09.822758 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:09.896713 systemd[1]: var-lib-kubelet-pods-03f21fda\x2d11b1\x2d4a93\x2d8ca1\x2d41502cc623c7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 17 12:23:09.896839 systemd[1]: var-lib-kubelet-pods-03f21fda\x2d11b1\x2d4a93\x2d8ca1\x2d41502cc623c7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 17 12:23:09.923518 containerd[1459]: time="2025-01-17T12:23:09.923479797Z" level=info msg="RemoveContainer for \"dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132\" returns successfully" Jan 17 12:23:09.923961 kubelet[1783]: I0117 12:23:09.923769 1783 scope.go:117] "RemoveContainer" containerID="b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02" Jan 17 12:23:09.925001 containerd[1459]: time="2025-01-17T12:23:09.924977695Z" level=info msg="RemoveContainer for \"b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02\"" Jan 17 12:23:09.965642 containerd[1459]: time="2025-01-17T12:23:09.965578522Z" level=info msg="RemoveContainer for \"b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02\" returns successfully" Jan 17 12:23:09.965916 kubelet[1783]: I0117 12:23:09.965883 1783 scope.go:117] "RemoveContainer" containerID="c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f" Jan 17 12:23:09.967181 containerd[1459]: time="2025-01-17T12:23:09.967128463Z" level=info msg="RemoveContainer for \"c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f\"" Jan 17 12:23:09.970971 containerd[1459]: time="2025-01-17T12:23:09.970936624Z" level=info msg="RemoveContainer for \"c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f\" returns successfully" Jan 17 12:23:09.971123 kubelet[1783]: I0117 12:23:09.971089 1783 scope.go:117] "RemoveContainer" containerID="48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087" Jan 17 12:23:09.971971 containerd[1459]: time="2025-01-17T12:23:09.971946950Z" level=info msg="RemoveContainer for \"48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087\"" Jan 17 12:23:09.975167 containerd[1459]: time="2025-01-17T12:23:09.975139514Z" level=info msg="RemoveContainer for \"48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087\" returns successfully" Jan 17 12:23:09.975364 kubelet[1783]: I0117 12:23:09.975330 1783 scope.go:117] 
"RemoveContainer" containerID="7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3" Jan 17 12:23:09.976266 containerd[1459]: time="2025-01-17T12:23:09.976239777Z" level=info msg="RemoveContainer for \"7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3\"" Jan 17 12:23:09.979388 containerd[1459]: time="2025-01-17T12:23:09.979342576Z" level=info msg="RemoveContainer for \"7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3\" returns successfully" Jan 17 12:23:09.979507 kubelet[1783]: I0117 12:23:09.979484 1783 scope.go:117] "RemoveContainer" containerID="dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132" Jan 17 12:23:09.979669 containerd[1459]: time="2025-01-17T12:23:09.979640845Z" level=error msg="ContainerStatus for \"dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132\": not found" Jan 17 12:23:09.979825 kubelet[1783]: E0117 12:23:09.979796 1783 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132\": not found" containerID="dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132" Jan 17 12:23:09.979922 kubelet[1783]: I0117 12:23:09.979905 1783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132"} err="failed to get container status \"dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd508ea9cf99ad267d6ccb94e39fa5b98f9499c1d1290d95ed7e61391dcb1132\": not found" Jan 17 12:23:09.979922 kubelet[1783]: I0117 12:23:09.979921 1783 scope.go:117] "RemoveContainer" 
containerID="b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02" Jan 17 12:23:09.980082 containerd[1459]: time="2025-01-17T12:23:09.980052839Z" level=error msg="ContainerStatus for \"b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02\": not found" Jan 17 12:23:09.980188 kubelet[1783]: E0117 12:23:09.980160 1783 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02\": not found" containerID="b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02" Jan 17 12:23:09.980188 kubelet[1783]: I0117 12:23:09.980187 1783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02"} err="failed to get container status \"b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02\": rpc error: code = NotFound desc = an error occurred when try to find container \"b809f2b4670df854e76fbcd3d074f765e57f1e1cf9372879e44fa1e715a74e02\": not found" Jan 17 12:23:09.980282 kubelet[1783]: I0117 12:23:09.980197 1783 scope.go:117] "RemoveContainer" containerID="c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f" Jan 17 12:23:09.980400 containerd[1459]: time="2025-01-17T12:23:09.980350617Z" level=error msg="ContainerStatus for \"c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f\": not found" Jan 17 12:23:09.980515 kubelet[1783]: E0117 12:23:09.980499 1783 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f\": not found" containerID="c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f" Jan 17 12:23:09.980563 kubelet[1783]: I0117 12:23:09.980523 1783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f"} err="failed to get container status \"c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c343154bebd1130836d569bc60f569868200c0ac64682207e5a5eaad4b52211f\": not found" Jan 17 12:23:09.980563 kubelet[1783]: I0117 12:23:09.980536 1783 scope.go:117] "RemoveContainer" containerID="48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087" Jan 17 12:23:09.980712 containerd[1459]: time="2025-01-17T12:23:09.980673855Z" level=error msg="ContainerStatus for \"48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087\": not found" Jan 17 12:23:09.980849 kubelet[1783]: E0117 12:23:09.980823 1783 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087\": not found" containerID="48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087" Jan 17 12:23:09.980903 kubelet[1783]: I0117 12:23:09.980891 1783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087"} err="failed to get container status \"48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"48c67b0d2f6455f31421e24ccfd43fc9c54d5780caff18a860a8118affac1087\": not found" Jan 17 12:23:09.980935 kubelet[1783]: I0117 12:23:09.980913 1783 scope.go:117] "RemoveContainer" containerID="7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3" Jan 17 12:23:09.981179 containerd[1459]: time="2025-01-17T12:23:09.981115918Z" level=error msg="ContainerStatus for \"7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3\": not found" Jan 17 12:23:09.981290 kubelet[1783]: E0117 12:23:09.981274 1783 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3\": not found" containerID="7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3" Jan 17 12:23:09.981338 kubelet[1783]: I0117 12:23:09.981302 1783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3"} err="failed to get container status \"7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"7af3a92e694df86c0a4871d353407141f0c205781654451f3a1210c7536c54f3\": not found" Jan 17 12:23:10.653873 kubelet[1783]: I0117 12:23:10.653829 1783 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="03f21fda-11b1-4a93-8ca1-41502cc623c7" path="/var/lib/kubelet/pods/03f21fda-11b1-4a93-8ca1-41502cc623c7/volumes" Jan 17 12:23:10.823398 kubelet[1783]: E0117 12:23:10.823355 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 
12:23:11.456476 kubelet[1783]: I0117 12:23:11.456416 1783 topology_manager.go:215] "Topology Admit Handler" podUID="bb49efbd-9bef-4bd5-899b-c68786547ec7" podNamespace="kube-system" podName="cilium-lltg6" Jan 17 12:23:11.456476 kubelet[1783]: E0117 12:23:11.456471 1783 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="03f21fda-11b1-4a93-8ca1-41502cc623c7" containerName="mount-cgroup" Jan 17 12:23:11.456476 kubelet[1783]: E0117 12:23:11.456480 1783 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="03f21fda-11b1-4a93-8ca1-41502cc623c7" containerName="apply-sysctl-overwrites" Jan 17 12:23:11.456476 kubelet[1783]: E0117 12:23:11.456487 1783 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="03f21fda-11b1-4a93-8ca1-41502cc623c7" containerName="mount-bpf-fs" Jan 17 12:23:11.456476 kubelet[1783]: E0117 12:23:11.456493 1783 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="03f21fda-11b1-4a93-8ca1-41502cc623c7" containerName="clean-cilium-state" Jan 17 12:23:11.456741 kubelet[1783]: E0117 12:23:11.456500 1783 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="03f21fda-11b1-4a93-8ca1-41502cc623c7" containerName="cilium-agent" Jan 17 12:23:11.456741 kubelet[1783]: I0117 12:23:11.456520 1783 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f21fda-11b1-4a93-8ca1-41502cc623c7" containerName="cilium-agent" Jan 17 12:23:11.458360 kubelet[1783]: I0117 12:23:11.458332 1783 topology_manager.go:215] "Topology Admit Handler" podUID="ca0e4dec-8dc8-417d-a00b-35da25c4a5fb" podNamespace="kube-system" podName="cilium-operator-5cc964979-kf5xh" Jan 17 12:23:11.464257 systemd[1]: Created slice kubepods-burstable-podbb49efbd_9bef_4bd5_899b_c68786547ec7.slice - libcontainer container kubepods-burstable-podbb49efbd_9bef_4bd5_899b_c68786547ec7.slice. 
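The `RemoveContainer` / `ContainerStatus … not found` entries above all carry the 64-hex container ID in a `containerID="…"` field. A minimal, hypothetical helper (not part of kubelet) for pulling those IDs out of klog-style lines when triaging such a teardown:

```python
import re

# Matches the containerID="<64 hex chars>" field in klog-style kubelet
# lines such as the RemoveContainer / ContainerStatus entries above.
ID_RE = re.compile(r'containerID="([0-9a-f]{64})"')

def container_ids(lines):
    """Return container IDs in order of first appearance, one per line."""
    ids = []
    for line in lines:
        m = ID_RE.search(line)
        if m:
            ids.append(m.group(1))
    return ids
```

The NotFound errors themselves are benign here: kubelet re-queries status for containers it just deleted, and containerd correctly reports them gone.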
Jan 17 12:23:11.487777 systemd[1]: Created slice kubepods-besteffort-podca0e4dec_8dc8_417d_a00b_35da25c4a5fb.slice - libcontainer container kubepods-besteffort-podca0e4dec_8dc8_417d_a00b_35da25c4a5fb.slice. Jan 17 12:23:11.567679 kubelet[1783]: I0117 12:23:11.567612 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb49efbd-9bef-4bd5-899b-c68786547ec7-cilium-run\") pod \"cilium-lltg6\" (UID: \"bb49efbd-9bef-4bd5-899b-c68786547ec7\") " pod="kube-system/cilium-lltg6" Jan 17 12:23:11.567679 kubelet[1783]: I0117 12:23:11.567681 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb49efbd-9bef-4bd5-899b-c68786547ec7-bpf-maps\") pod \"cilium-lltg6\" (UID: \"bb49efbd-9bef-4bd5-899b-c68786547ec7\") " pod="kube-system/cilium-lltg6" Jan 17 12:23:11.567882 kubelet[1783]: I0117 12:23:11.567719 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca0e4dec-8dc8-417d-a00b-35da25c4a5fb-cilium-config-path\") pod \"cilium-operator-5cc964979-kf5xh\" (UID: \"ca0e4dec-8dc8-417d-a00b-35da25c4a5fb\") " pod="kube-system/cilium-operator-5cc964979-kf5xh" Jan 17 12:23:11.567882 kubelet[1783]: I0117 12:23:11.567754 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb49efbd-9bef-4bd5-899b-c68786547ec7-host-proc-sys-kernel\") pod \"cilium-lltg6\" (UID: \"bb49efbd-9bef-4bd5-899b-c68786547ec7\") " pod="kube-system/cilium-lltg6" Jan 17 12:23:11.567882 kubelet[1783]: I0117 12:23:11.567783 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/bb49efbd-9bef-4bd5-899b-c68786547ec7-etc-cni-netd\") pod \"cilium-lltg6\" (UID: \"bb49efbd-9bef-4bd5-899b-c68786547ec7\") " pod="kube-system/cilium-lltg6" Jan 17 12:23:11.567882 kubelet[1783]: I0117 12:23:11.567809 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb49efbd-9bef-4bd5-899b-c68786547ec7-cilium-cgroup\") pod \"cilium-lltg6\" (UID: \"bb49efbd-9bef-4bd5-899b-c68786547ec7\") " pod="kube-system/cilium-lltg6" Jan 17 12:23:11.567882 kubelet[1783]: I0117 12:23:11.567835 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb49efbd-9bef-4bd5-899b-c68786547ec7-cni-path\") pod \"cilium-lltg6\" (UID: \"bb49efbd-9bef-4bd5-899b-c68786547ec7\") " pod="kube-system/cilium-lltg6" Jan 17 12:23:11.568013 kubelet[1783]: I0117 12:23:11.567863 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztkpn\" (UniqueName: \"kubernetes.io/projected/ca0e4dec-8dc8-417d-a00b-35da25c4a5fb-kube-api-access-ztkpn\") pod \"cilium-operator-5cc964979-kf5xh\" (UID: \"ca0e4dec-8dc8-417d-a00b-35da25c4a5fb\") " pod="kube-system/cilium-operator-5cc964979-kf5xh" Jan 17 12:23:11.568013 kubelet[1783]: I0117 12:23:11.567923 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb49efbd-9bef-4bd5-899b-c68786547ec7-xtables-lock\") pod \"cilium-lltg6\" (UID: \"bb49efbd-9bef-4bd5-899b-c68786547ec7\") " pod="kube-system/cilium-lltg6" Jan 17 12:23:11.568067 kubelet[1783]: I0117 12:23:11.568025 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb49efbd-9bef-4bd5-899b-c68786547ec7-cilium-config-path\") pod 
\"cilium-lltg6\" (UID: \"bb49efbd-9bef-4bd5-899b-c68786547ec7\") " pod="kube-system/cilium-lltg6" Jan 17 12:23:11.568090 kubelet[1783]: I0117 12:23:11.568075 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb49efbd-9bef-4bd5-899b-c68786547ec7-hubble-tls\") pod \"cilium-lltg6\" (UID: \"bb49efbd-9bef-4bd5-899b-c68786547ec7\") " pod="kube-system/cilium-lltg6" Jan 17 12:23:11.568158 kubelet[1783]: I0117 12:23:11.568127 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bb49efbd-9bef-4bd5-899b-c68786547ec7-cilium-ipsec-secrets\") pod \"cilium-lltg6\" (UID: \"bb49efbd-9bef-4bd5-899b-c68786547ec7\") " pod="kube-system/cilium-lltg6" Jan 17 12:23:11.568283 kubelet[1783]: I0117 12:23:11.568211 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb49efbd-9bef-4bd5-899b-c68786547ec7-host-proc-sys-net\") pod \"cilium-lltg6\" (UID: \"bb49efbd-9bef-4bd5-899b-c68786547ec7\") " pod="kube-system/cilium-lltg6" Jan 17 12:23:11.568435 kubelet[1783]: I0117 12:23:11.568300 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb49efbd-9bef-4bd5-899b-c68786547ec7-clustermesh-secrets\") pod \"cilium-lltg6\" (UID: \"bb49efbd-9bef-4bd5-899b-c68786547ec7\") " pod="kube-system/cilium-lltg6" Jan 17 12:23:11.568435 kubelet[1783]: I0117 12:23:11.568343 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfcbk\" (UniqueName: \"kubernetes.io/projected/bb49efbd-9bef-4bd5-899b-c68786547ec7-kube-api-access-pfcbk\") pod \"cilium-lltg6\" (UID: \"bb49efbd-9bef-4bd5-899b-c68786547ec7\") " 
pod="kube-system/cilium-lltg6" Jan 17 12:23:11.568435 kubelet[1783]: I0117 12:23:11.568414 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb49efbd-9bef-4bd5-899b-c68786547ec7-hostproc\") pod \"cilium-lltg6\" (UID: \"bb49efbd-9bef-4bd5-899b-c68786547ec7\") " pod="kube-system/cilium-lltg6" Jan 17 12:23:11.568554 kubelet[1783]: I0117 12:23:11.568451 1783 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb49efbd-9bef-4bd5-899b-c68786547ec7-lib-modules\") pod \"cilium-lltg6\" (UID: \"bb49efbd-9bef-4bd5-899b-c68786547ec7\") " pod="kube-system/cilium-lltg6" Jan 17 12:23:11.679663 kubelet[1783]: E0117 12:23:11.679473 1783 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 12:23:11.785138 kubelet[1783]: E0117 12:23:11.784988 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:11.785610 containerd[1459]: time="2025-01-17T12:23:11.785557173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lltg6,Uid:bb49efbd-9bef-4bd5-899b-c68786547ec7,Namespace:kube-system,Attempt:0,}" Jan 17 12:23:11.790612 kubelet[1783]: E0117 12:23:11.790581 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:11.791195 containerd[1459]: time="2025-01-17T12:23:11.791148635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-kf5xh,Uid:ca0e4dec-8dc8-417d-a00b-35da25c4a5fb,Namespace:kube-system,Attempt:0,}" Jan 17 12:23:11.816835 containerd[1459]: 
time="2025-01-17T12:23:11.816474650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:11.816835 containerd[1459]: time="2025-01-17T12:23:11.816542635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:11.816835 containerd[1459]: time="2025-01-17T12:23:11.816592012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:11.816835 containerd[1459]: time="2025-01-17T12:23:11.816696058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:11.820182 containerd[1459]: time="2025-01-17T12:23:11.819895694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:23:11.820182 containerd[1459]: time="2025-01-17T12:23:11.819953058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:23:11.820182 containerd[1459]: time="2025-01-17T12:23:11.819967807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:11.820286 containerd[1459]: time="2025-01-17T12:23:11.820066963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:23:11.823728 kubelet[1783]: E0117 12:23:11.823696 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:11.839590 systemd[1]: Started cri-containerd-50b95eaab4d1bfaabb5d99853e52f6b6e4725ce98dda00e5ebb4267147d8d8e7.scope - libcontainer container 50b95eaab4d1bfaabb5d99853e52f6b6e4725ce98dda00e5ebb4267147d8d8e7. Jan 17 12:23:11.843234 systemd[1]: Started cri-containerd-5827bbef471c0a1e65683ea880213c63b12e05d635c8a9b8a2e47fd16e011579.scope - libcontainer container 5827bbef471c0a1e65683ea880213c63b12e05d635c8a9b8a2e47fd16e011579. Jan 17 12:23:11.862663 containerd[1459]: time="2025-01-17T12:23:11.862609848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lltg6,Uid:bb49efbd-9bef-4bd5-899b-c68786547ec7,Namespace:kube-system,Attempt:0,} returns sandbox id \"50b95eaab4d1bfaabb5d99853e52f6b6e4725ce98dda00e5ebb4267147d8d8e7\"" Jan 17 12:23:11.863464 kubelet[1783]: E0117 12:23:11.863439 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:11.866113 containerd[1459]: time="2025-01-17T12:23:11.865976064Z" level=info msg="CreateContainer within sandbox \"50b95eaab4d1bfaabb5d99853e52f6b6e4725ce98dda00e5ebb4267147d8d8e7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:23:11.882742 containerd[1459]: time="2025-01-17T12:23:11.882665794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-kf5xh,Uid:ca0e4dec-8dc8-417d-a00b-35da25c4a5fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5827bbef471c0a1e65683ea880213c63b12e05d635c8a9b8a2e47fd16e011579\"" Jan 17 12:23:11.883195 containerd[1459]: time="2025-01-17T12:23:11.883155981Z" level=info msg="CreateContainer within sandbox 
\"50b95eaab4d1bfaabb5d99853e52f6b6e4725ce98dda00e5ebb4267147d8d8e7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"69c836118d40e0a70fca1f260247137616c86679c4c1f40f01fd1f5647295134\"" Jan 17 12:23:11.883713 kubelet[1783]: E0117 12:23:11.883664 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:11.883926 containerd[1459]: time="2025-01-17T12:23:11.883745092Z" level=info msg="StartContainer for \"69c836118d40e0a70fca1f260247137616c86679c4c1f40f01fd1f5647295134\"" Jan 17 12:23:11.884914 containerd[1459]: time="2025-01-17T12:23:11.884854059Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 12:23:11.913542 systemd[1]: Started cri-containerd-69c836118d40e0a70fca1f260247137616c86679c4c1f40f01fd1f5647295134.scope - libcontainer container 69c836118d40e0a70fca1f260247137616c86679c4c1f40f01fd1f5647295134. Jan 17 12:23:11.941096 containerd[1459]: time="2025-01-17T12:23:11.941024889Z" level=info msg="StartContainer for \"69c836118d40e0a70fca1f260247137616c86679c4c1f40f01fd1f5647295134\" returns successfully" Jan 17 12:23:11.950810 systemd[1]: cri-containerd-69c836118d40e0a70fca1f260247137616c86679c4c1f40f01fd1f5647295134.scope: Deactivated successfully. 
Jan 17 12:23:12.090213 containerd[1459]: time="2025-01-17T12:23:12.090058263Z" level=info msg="shim disconnected" id=69c836118d40e0a70fca1f260247137616c86679c4c1f40f01fd1f5647295134 namespace=k8s.io Jan 17 12:23:12.090213 containerd[1459]: time="2025-01-17T12:23:12.090114575Z" level=warning msg="cleaning up after shim disconnected" id=69c836118d40e0a70fca1f260247137616c86679c4c1f40f01fd1f5647295134 namespace=k8s.io Jan 17 12:23:12.090489 containerd[1459]: time="2025-01-17T12:23:12.090358535Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:23:12.789060 kubelet[1783]: E0117 12:23:12.789028 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:12.790824 containerd[1459]: time="2025-01-17T12:23:12.790780124Z" level=info msg="CreateContainer within sandbox \"50b95eaab4d1bfaabb5d99853e52f6b6e4725ce98dda00e5ebb4267147d8d8e7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:23:12.815879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1887732390.mount: Deactivated successfully. 
Jan 17 12:23:12.824802 kubelet[1783]: E0117 12:23:12.824726 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:12.844589 containerd[1459]: time="2025-01-17T12:23:12.844518990Z" level=info msg="CreateContainer within sandbox \"50b95eaab4d1bfaabb5d99853e52f6b6e4725ce98dda00e5ebb4267147d8d8e7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d85c67a8cec685a1674d8a5b25ab35ef185fd47abb6ebba8563338874c10e4d0\"" Jan 17 12:23:12.845176 containerd[1459]: time="2025-01-17T12:23:12.845144873Z" level=info msg="StartContainer for \"d85c67a8cec685a1674d8a5b25ab35ef185fd47abb6ebba8563338874c10e4d0\"" Jan 17 12:23:12.877513 systemd[1]: Started cri-containerd-d85c67a8cec685a1674d8a5b25ab35ef185fd47abb6ebba8563338874c10e4d0.scope - libcontainer container d85c67a8cec685a1674d8a5b25ab35ef185fd47abb6ebba8563338874c10e4d0. Jan 17 12:23:12.920787 systemd[1]: cri-containerd-d85c67a8cec685a1674d8a5b25ab35ef185fd47abb6ebba8563338874c10e4d0.scope: Deactivated successfully. 
Jan 17 12:23:13.005485 containerd[1459]: time="2025-01-17T12:23:13.005425193Z" level=info msg="StartContainer for \"d85c67a8cec685a1674d8a5b25ab35ef185fd47abb6ebba8563338874c10e4d0\" returns successfully" Jan 17 12:23:13.046922 containerd[1459]: time="2025-01-17T12:23:13.046737780Z" level=info msg="shim disconnected" id=d85c67a8cec685a1674d8a5b25ab35ef185fd47abb6ebba8563338874c10e4d0 namespace=k8s.io Jan 17 12:23:13.046922 containerd[1459]: time="2025-01-17T12:23:13.046805744Z" level=warning msg="cleaning up after shim disconnected" id=d85c67a8cec685a1674d8a5b25ab35ef185fd47abb6ebba8563338874c10e4d0 namespace=k8s.io Jan 17 12:23:13.046922 containerd[1459]: time="2025-01-17T12:23:13.046817717Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:23:13.676830 systemd[1]: run-containerd-runc-k8s.io-d85c67a8cec685a1674d8a5b25ab35ef185fd47abb6ebba8563338874c10e4d0-runc.1irH5L.mount: Deactivated successfully. Jan 17 12:23:13.676941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d85c67a8cec685a1674d8a5b25ab35ef185fd47abb6ebba8563338874c10e4d0-rootfs.mount: Deactivated successfully. Jan 17 12:23:13.792294 kubelet[1783]: E0117 12:23:13.792264 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:13.794035 containerd[1459]: time="2025-01-17T12:23:13.793989639Z" level=info msg="CreateContainer within sandbox \"50b95eaab4d1bfaabb5d99853e52f6b6e4725ce98dda00e5ebb4267147d8d8e7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 12:23:13.825194 kubelet[1783]: E0117 12:23:13.825159 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:14.157614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1711203495.mount: Deactivated successfully. 
Jan 17 12:23:14.195978 containerd[1459]: time="2025-01-17T12:23:14.195914588Z" level=info msg="CreateContainer within sandbox \"50b95eaab4d1bfaabb5d99853e52f6b6e4725ce98dda00e5ebb4267147d8d8e7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6ade13cf6875c82b52880b337c65b9292c59e424ae0904a2c5f596a37e480425\"" Jan 17 12:23:14.196521 containerd[1459]: time="2025-01-17T12:23:14.196490190Z" level=info msg="StartContainer for \"6ade13cf6875c82b52880b337c65b9292c59e424ae0904a2c5f596a37e480425\"" Jan 17 12:23:14.228495 systemd[1]: Started cri-containerd-6ade13cf6875c82b52880b337c65b9292c59e424ae0904a2c5f596a37e480425.scope - libcontainer container 6ade13cf6875c82b52880b337c65b9292c59e424ae0904a2c5f596a37e480425. Jan 17 12:23:14.257743 systemd[1]: cri-containerd-6ade13cf6875c82b52880b337c65b9292c59e424ae0904a2c5f596a37e480425.scope: Deactivated successfully. Jan 17 12:23:14.259252 containerd[1459]: time="2025-01-17T12:23:14.259208924Z" level=info msg="StartContainer for \"6ade13cf6875c82b52880b337c65b9292c59e424ae0904a2c5f596a37e480425\" returns successfully" Jan 17 12:23:14.285036 containerd[1459]: time="2025-01-17T12:23:14.284964195Z" level=info msg="shim disconnected" id=6ade13cf6875c82b52880b337c65b9292c59e424ae0904a2c5f596a37e480425 namespace=k8s.io Jan 17 12:23:14.285036 containerd[1459]: time="2025-01-17T12:23:14.285030105Z" level=warning msg="cleaning up after shim disconnected" id=6ade13cf6875c82b52880b337c65b9292c59e424ae0904a2c5f596a37e480425 namespace=k8s.io Jan 17 12:23:14.285306 containerd[1459]: time="2025-01-17T12:23:14.285040525Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:23:14.676861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ade13cf6875c82b52880b337c65b9292c59e424ae0904a2c5f596a37e480425-rootfs.mount: Deactivated successfully. 
Jan 17 12:23:14.796510 kubelet[1783]: E0117 12:23:14.796478 1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:23:14.798296 containerd[1459]: time="2025-01-17T12:23:14.798264134Z" level=info msg="CreateContainer within sandbox \"50b95eaab4d1bfaabb5d99853e52f6b6e4725ce98dda00e5ebb4267147d8d8e7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 12:23:14.825641 kubelet[1783]: E0117 12:23:14.825572 1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:23:14.886092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748086322.mount: Deactivated successfully. Jan 17 12:23:14.888460 containerd[1459]: time="2025-01-17T12:23:14.888410658Z" level=info msg="CreateContainer within sandbox \"50b95eaab4d1bfaabb5d99853e52f6b6e4725ce98dda00e5ebb4267147d8d8e7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7902a2beeefe5b7f23f27f82a7db00549b7f3f9b5887428f4eb55a193dbb795f\"" Jan 17 12:23:14.889088 containerd[1459]: time="2025-01-17T12:23:14.889039965Z" level=info msg="StartContainer for \"7902a2beeefe5b7f23f27f82a7db00549b7f3f9b5887428f4eb55a193dbb795f\"" Jan 17 12:23:14.920506 systemd[1]: Started cri-containerd-7902a2beeefe5b7f23f27f82a7db00549b7f3f9b5887428f4eb55a193dbb795f.scope - libcontainer container 7902a2beeefe5b7f23f27f82a7db00549b7f3f9b5887428f4eb55a193dbb795f. Jan 17 12:23:14.945159 systemd[1]: cri-containerd-7902a2beeefe5b7f23f27f82a7db00549b7f3f9b5887428f4eb55a193dbb795f.scope: Deactivated successfully. 
Jan 17 12:23:14.947729 containerd[1459]: time="2025-01-17T12:23:14.947690819Z" level=info msg="StartContainer for \"7902a2beeefe5b7f23f27f82a7db00549b7f3f9b5887428f4eb55a193dbb795f\" returns successfully"
Jan 17 12:23:14.973780 containerd[1459]: time="2025-01-17T12:23:14.973708486Z" level=info msg="shim disconnected" id=7902a2beeefe5b7f23f27f82a7db00549b7f3f9b5887428f4eb55a193dbb795f namespace=k8s.io
Jan 17 12:23:14.973780 containerd[1459]: time="2025-01-17T12:23:14.973778174Z" level=warning msg="cleaning up after shim disconnected" id=7902a2beeefe5b7f23f27f82a7db00549b7f3f9b5887428f4eb55a193dbb795f namespace=k8s.io
Jan 17 12:23:14.973984 containerd[1459]: time="2025-01-17T12:23:14.973790678Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:23:15.676934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7902a2beeefe5b7f23f27f82a7db00549b7f3f9b5887428f4eb55a193dbb795f-rootfs.mount: Deactivated successfully.
Jan 17 12:23:15.799485 kubelet[1783]: E0117 12:23:15.799452    1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:23:15.801217 containerd[1459]: time="2025-01-17T12:23:15.801183810Z" level=info msg="CreateContainer within sandbox \"50b95eaab4d1bfaabb5d99853e52f6b6e4725ce98dda00e5ebb4267147d8d8e7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 12:23:15.826582 kubelet[1783]: E0117 12:23:15.826515    1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:16.363238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3017008672.mount: Deactivated successfully.
Jan 17 12:23:16.364386 containerd[1459]: time="2025-01-17T12:23:16.364333653Z" level=info msg="CreateContainer within sandbox \"50b95eaab4d1bfaabb5d99853e52f6b6e4725ce98dda00e5ebb4267147d8d8e7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3813d0ce4c857ad2ee40874a940529d967fad1ee42f3982826ea172f25348fd4\""
Jan 17 12:23:16.364932 containerd[1459]: time="2025-01-17T12:23:16.364908832Z" level=info msg="StartContainer for \"3813d0ce4c857ad2ee40874a940529d967fad1ee42f3982826ea172f25348fd4\""
Jan 17 12:23:16.391708 systemd[1]: Started cri-containerd-3813d0ce4c857ad2ee40874a940529d967fad1ee42f3982826ea172f25348fd4.scope - libcontainer container 3813d0ce4c857ad2ee40874a940529d967fad1ee42f3982826ea172f25348fd4.
Jan 17 12:23:16.422192 containerd[1459]: time="2025-01-17T12:23:16.422148889Z" level=info msg="StartContainer for \"3813d0ce4c857ad2ee40874a940529d967fad1ee42f3982826ea172f25348fd4\" returns successfully"
Jan 17 12:23:16.804831 kubelet[1783]: E0117 12:23:16.804788    1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:23:16.827002 kubelet[1783]: E0117 12:23:16.826937    1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:16.872060 containerd[1459]: time="2025-01-17T12:23:16.872000321Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:23:16.873206 containerd[1459]: time="2025-01-17T12:23:16.873132935Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907161"
Jan 17 12:23:16.875479 containerd[1459]: time="2025-01-17T12:23:16.874655856Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:23:16.875959 containerd[1459]: time="2025-01-17T12:23:16.875915190Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.991019139s"
Jan 17 12:23:16.876056 containerd[1459]: time="2025-01-17T12:23:16.875961971Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 17 12:23:16.877770 containerd[1459]: time="2025-01-17T12:23:16.877741226Z" level=info msg="CreateContainer within sandbox \"5827bbef471c0a1e65683ea880213c63b12e05d635c8a9b8a2e47fd16e011579\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 17 12:23:16.878404 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 17 12:23:16.891910 containerd[1459]: time="2025-01-17T12:23:16.891664741Z" level=info msg="CreateContainer within sandbox \"5827bbef471c0a1e65683ea880213c63b12e05d635c8a9b8a2e47fd16e011579\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1b42b221b8ceb263c288b889353cb272a3346202cfe03be46f09035d57cb5df3\""
Jan 17 12:23:16.892281 containerd[1459]: time="2025-01-17T12:23:16.892244680Z" level=info msg="StartContainer for \"1b42b221b8ceb263c288b889353cb272a3346202cfe03be46f09035d57cb5df3\""
Jan 17 12:23:16.929590 systemd[1]: Started cri-containerd-1b42b221b8ceb263c288b889353cb272a3346202cfe03be46f09035d57cb5df3.scope - libcontainer container 1b42b221b8ceb263c288b889353cb272a3346202cfe03be46f09035d57cb5df3.
Jan 17 12:23:17.036686 containerd[1459]: time="2025-01-17T12:23:17.036639954Z" level=info msg="StartContainer for \"1b42b221b8ceb263c288b889353cb272a3346202cfe03be46f09035d57cb5df3\" returns successfully"
Jan 17 12:23:17.677159 systemd[1]: run-containerd-runc-k8s.io-1b42b221b8ceb263c288b889353cb272a3346202cfe03be46f09035d57cb5df3-runc.WGMvGE.mount: Deactivated successfully.
Jan 17 12:23:17.807610 kubelet[1783]: E0117 12:23:17.807568    1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:23:17.807925 kubelet[1783]: E0117 12:23:17.807904    1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:23:17.828050 kubelet[1783]: E0117 12:23:17.827995    1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:17.906302 kubelet[1783]: I0117 12:23:17.905716    1783 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-lltg6" podStartSLOduration=6.905667269 podStartE2EDuration="6.905667269s" podCreationTimestamp="2025-01-17 12:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:23:16.849349585 +0000 UTC m=+71.503709840" watchObservedRunningTime="2025-01-17 12:23:17.905667269 +0000 UTC m=+72.560027524"
Jan 17 12:23:18.474184 systemd[1]: run-containerd-runc-k8s.io-3813d0ce4c857ad2ee40874a940529d967fad1ee42f3982826ea172f25348fd4-runc.I3WIHl.mount: Deactivated successfully.
Jan 17 12:23:18.809290 kubelet[1783]: E0117 12:23:18.809161    1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:23:18.829184 kubelet[1783]: E0117 12:23:18.829123    1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:19.829727 kubelet[1783]: E0117 12:23:19.829653    1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:20.207930 systemd-networkd[1391]: lxc_health: Link UP
Jan 17 12:23:20.217651 systemd-networkd[1391]: lxc_health: Gained carrier
Jan 17 12:23:20.830812 kubelet[1783]: E0117 12:23:20.830766    1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:21.711748 systemd-networkd[1391]: lxc_health: Gained IPv6LL
Jan 17 12:23:21.787273 kubelet[1783]: E0117 12:23:21.787241    1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:23:21.806759 kubelet[1783]: I0117 12:23:21.806719    1783 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-kf5xh" podStartSLOduration=5.814966399 podStartE2EDuration="10.806626298s" podCreationTimestamp="2025-01-17 12:23:11 +0000 UTC" firstStartedPulling="2025-01-17 12:23:11.884496043 +0000 UTC m=+66.538856298" lastFinishedPulling="2025-01-17 12:23:16.876155942 +0000 UTC m=+71.530516197" observedRunningTime="2025-01-17 12:23:17.90626344 +0000 UTC m=+72.560623695" watchObservedRunningTime="2025-01-17 12:23:21.806626298 +0000 UTC m=+76.460986563"
Jan 17 12:23:21.814252 kubelet[1783]: E0117 12:23:21.814221    1783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 12:23:21.831707 kubelet[1783]: E0117 12:23:21.831661    1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:22.832662 kubelet[1783]: E0117 12:23:22.832630    1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:23.833721 kubelet[1783]: E0117 12:23:23.833673    1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:24.834223 kubelet[1783]: E0117 12:23:24.834155    1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:25.779810 kubelet[1783]: E0117 12:23:25.779740    1783 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:25.834559 kubelet[1783]: E0117 12:23:25.834506    1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:26.834914 kubelet[1783]: E0117 12:23:26.834871    1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:27.835526 kubelet[1783]: E0117 12:23:27.835470    1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:23:28.835931 kubelet[1783]: E0117 12:23:28.835868    1783 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"