Feb 13 19:31:12.088158 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:40:15 -00 2025
Feb 13 19:31:12.088202 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:31:12.088218 kernel: BIOS-provided physical RAM map:
Feb 13 19:31:12.088230 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 19:31:12.088242 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 19:31:12.088254 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 19:31:12.088339 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 13 19:31:12.088354 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 13 19:31:12.088367 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 13 19:31:12.088379 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 19:31:12.088392 kernel: NX (Execute Disable) protection: active
Feb 13 19:31:12.088404 kernel: APIC: Static calls initialized
Feb 13 19:31:12.088416 kernel: SMBIOS 2.7 present.
Feb 13 19:31:12.088429 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 13 19:31:12.088446 kernel: Hypervisor detected: KVM
Feb 13 19:31:12.088459 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:31:12.088470 kernel: kvm-clock: using sched offset of 8577766369 cycles
Feb 13 19:31:12.088485 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:31:12.088498 kernel: tsc: Detected 2499.996 MHz processor
Feb 13 19:31:12.088570 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:31:12.088589 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:31:12.088606 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 13 19:31:12.088857 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 19:31:12.088872 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:31:12.088886 kernel: Using GB pages for direct mapping
Feb 13 19:31:12.088900 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:31:12.088914 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 13 19:31:12.088925 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 13 19:31:12.088937 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:31:12.088951 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 13 19:31:12.088969 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 13 19:31:12.088982 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 19:31:12.088995 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:31:12.089008 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 13 19:31:12.089021 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:31:12.089035 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 13 19:31:12.089048 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 13 19:31:12.089062 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 19:31:12.089075 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 13 19:31:12.089092 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 13 19:31:12.089111 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 13 19:31:12.089125 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 13 19:31:12.089139 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 13 19:31:12.089153 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 13 19:31:12.089170 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 13 19:31:12.089184 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 13 19:31:12.089198 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 13 19:31:12.089212 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 13 19:31:12.089226 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 19:31:12.089240 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 19:31:12.089254 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 13 19:31:12.089268 kernel: NUMA: Initialized distance table, cnt=1
Feb 13 19:31:12.089281 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 13 19:31:12.089298 kernel: Zone ranges:
Feb 13 19:31:12.089312 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:31:12.089326 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 13 19:31:12.089340 kernel: Normal empty
Feb 13 19:31:12.089354 kernel: Movable zone start for each node
Feb 13 19:31:12.089367 kernel: Early memory node ranges
Feb 13 19:31:12.089382 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 19:31:12.089396 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 13 19:31:12.089410 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 13 19:31:12.089424 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:31:12.089441 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 19:31:12.089455 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 13 19:31:12.089469 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 19:31:12.089483 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:31:12.089497 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 13 19:31:12.089511 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:31:12.089525 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:31:12.089539 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:31:12.089573 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:31:12.089590 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:31:12.089604 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:31:12.089616 kernel: TSC deadline timer available
Feb 13 19:31:12.089628 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 19:31:12.089643 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:31:12.089656 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 13 19:31:12.089668 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:31:12.089681 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:31:12.089694 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 19:31:12.089711 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 19:31:12.089727 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 19:31:12.089743 kernel: pcpu-alloc: [0] 0 1
Feb 13 19:31:12.089758 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:31:12.089775 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:31:12.089794 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:31:12.089812 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:31:12.089827 kernel: random: crng init done
Feb 13 19:31:12.089849 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:31:12.089866 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 19:31:12.089883 kernel: Fallback order for Node 0: 0
Feb 13 19:31:12.089900 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Feb 13 19:31:12.089915 kernel: Policy zone: DMA32
Feb 13 19:31:12.089932 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:31:12.089948 kernel: Memory: 1930300K/2057760K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 127200K reserved, 0K cma-reserved)
Feb 13 19:31:12.089965 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:31:12.089981 kernel: Kernel/User page tables isolation: enabled
Feb 13 19:31:12.090008 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:31:12.090024 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:31:12.090040 kernel: Dynamic Preempt: voluntary
Feb 13 19:31:12.090056 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:31:12.090080 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:31:12.090095 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:31:12.090111 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:31:12.090129 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:31:12.090145 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:31:12.090165 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:31:12.090181 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:31:12.090196 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 19:31:12.090213 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:31:12.090227 kernel: Console: colour VGA+ 80x25
Feb 13 19:31:12.090243 kernel: printk: console [ttyS0] enabled
Feb 13 19:31:12.090259 kernel: ACPI: Core revision 20230628
Feb 13 19:31:12.090276 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 13 19:31:12.090292 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:31:12.090312 kernel: x2apic enabled
Feb 13 19:31:12.090330 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:31:12.090361 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Feb 13 19:31:12.090384 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Feb 13 19:31:12.090402 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 19:31:12.090419 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 19:31:12.090436 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:31:12.090452 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:31:12.090469 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:31:12.090486 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:31:12.090503 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 19:31:12.090520 kernel: RETBleed: Vulnerable
Feb 13 19:31:12.090537 kernel: Speculative Store Bypass: Vulnerable
Feb 13 19:31:12.090585 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 19:31:12.090602 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 19:31:12.090620 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 19:31:12.090640 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:31:12.090656 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:31:12.090671 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:31:12.090689 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 19:31:12.090704 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 19:31:12.090719 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 19:31:12.090734 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 19:31:12.090747 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 19:31:12.090762 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 13 19:31:12.090779 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:31:12.090793 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 19:31:12.090806 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 19:31:12.090819 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Feb 13 19:31:12.090834 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Feb 13 19:31:12.090851 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 13 19:31:12.090944 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Feb 13 19:31:12.090967 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 13 19:31:12.090988 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:31:12.091001 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:31:12.091014 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:31:12.091027 kernel: landlock: Up and running.
Feb 13 19:31:12.091041 kernel: SELinux: Initializing.
Feb 13 19:31:12.091057 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 19:31:12.091073 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 19:31:12.091089 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 19:31:12.091108 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:31:12.091121 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:31:12.091135 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:31:12.091149 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 19:31:12.091163 kernel: signal: max sigframe size: 3632
Feb 13 19:31:12.091176 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:31:12.091190 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:31:12.091202 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 19:31:12.091216 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:31:12.091233 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:31:12.091247 kernel: .... node #0, CPUs: #1
Feb 13 19:31:12.091262 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 19:31:12.091276 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 19:31:12.091290 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:31:12.091304 kernel: smpboot: Max logical packages: 1
Feb 13 19:31:12.091317 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Feb 13 19:31:12.091331 kernel: devtmpfs: initialized
Feb 13 19:31:12.091345 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:31:12.091362 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:31:12.091376 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:31:12.091389 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:31:12.091402 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:31:12.091416 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:31:12.091430 kernel: audit: type=2000 audit(1739475071.706:1): state=initialized audit_enabled=0 res=1
Feb 13 19:31:12.091444 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:31:12.091457 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:31:12.091471 kernel: cpuidle: using governor menu
Feb 13 19:31:12.091487 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:31:12.091501 kernel: dca service started, version 1.12.1
Feb 13 19:31:12.091515 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:31:12.091528 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:31:12.091559 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:31:12.091573 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:31:12.091585 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:31:12.091598 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:31:12.091611 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:31:12.091628 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:31:12.091642 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:31:12.091657 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:31:12.091673 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 19:31:12.091689 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:31:12.091704 kernel: ACPI: Interpreter enabled
Feb 13 19:31:12.091721 kernel: ACPI: PM: (supports S0 S5)
Feb 13 19:31:12.091736 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:31:12.091752 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:31:12.091772 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:31:12.091788 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 19:31:12.091804 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:31:12.092055 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:31:12.092193 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 19:31:12.092322 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 19:31:12.092340 kernel: acpiphp: Slot [3] registered
Feb 13 19:31:12.092359 kernel: acpiphp: Slot [4] registered
Feb 13 19:31:12.092374 kernel: acpiphp: Slot [5] registered
Feb 13 19:31:12.092389 kernel: acpiphp: Slot [6] registered
Feb 13 19:31:12.092404 kernel: acpiphp: Slot [7] registered
Feb 13 19:31:12.092419 kernel: acpiphp: Slot [8] registered
Feb 13 19:31:12.092434 kernel: acpiphp: Slot [9] registered
Feb 13 19:31:12.092449 kernel: acpiphp: Slot [10] registered
Feb 13 19:31:12.092465 kernel: acpiphp: Slot [11] registered
Feb 13 19:31:12.092480 kernel: acpiphp: Slot [12] registered
Feb 13 19:31:12.092498 kernel: acpiphp: Slot [13] registered
Feb 13 19:31:12.092512 kernel: acpiphp: Slot [14] registered
Feb 13 19:31:12.092527 kernel: acpiphp: Slot [15] registered
Feb 13 19:31:12.092558 kernel: acpiphp: Slot [16] registered
Feb 13 19:31:12.092573 kernel: acpiphp: Slot [17] registered
Feb 13 19:31:12.092588 kernel: acpiphp: Slot [18] registered
Feb 13 19:31:12.092603 kernel: acpiphp: Slot [19] registered
Feb 13 19:31:12.092682 kernel: acpiphp: Slot [20] registered
Feb 13 19:31:12.092697 kernel: acpiphp: Slot [21] registered
Feb 13 19:31:12.092713 kernel: acpiphp: Slot [22] registered
Feb 13 19:31:12.092732 kernel: acpiphp: Slot [23] registered
Feb 13 19:31:12.092746 kernel: acpiphp: Slot [24] registered
Feb 13 19:31:12.092762 kernel: acpiphp: Slot [25] registered
Feb 13 19:31:12.092777 kernel: acpiphp: Slot [26] registered
Feb 13 19:31:12.092792 kernel: acpiphp: Slot [27] registered
Feb 13 19:31:12.092807 kernel: acpiphp: Slot [28] registered
Feb 13 19:31:12.092822 kernel: acpiphp: Slot [29] registered
Feb 13 19:31:12.092836 kernel: acpiphp: Slot [30] registered
Feb 13 19:31:12.092850 kernel: acpiphp: Slot [31] registered
Feb 13 19:31:12.092868 kernel: PCI host bridge to bus 0000:00
Feb 13 19:31:12.093055 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:31:12.093186 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:31:12.093313 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:31:12.093568 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 19:31:12.093697 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:31:12.093854 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 19:31:12.094018 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 19:31:12.094233 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 13 19:31:12.094425 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 19:31:12.094819 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 13 19:31:12.095017 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 13 19:31:12.095153 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 13 19:31:12.095283 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 13 19:31:12.095419 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 13 19:31:12.095564 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 13 19:31:12.095893 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 13 19:31:12.096053 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 13 19:31:12.096184 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 13 19:31:12.096313 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 19:31:12.096443 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:31:12.096673 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:31:12.096821 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 13 19:31:12.097041 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:31:12.097185 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 13 19:31:12.097206 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:31:12.097222 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:31:12.097239 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:31:12.097260 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:31:12.097276 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 19:31:12.097292 kernel: iommu: Default domain type: Translated
Feb 13 19:31:12.097309 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:31:12.097325 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:31:12.097340 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:31:12.097357 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 19:31:12.097372 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 13 19:31:12.097512 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 13 19:31:12.097699 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 13 19:31:12.097839 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:31:12.097859 kernel: vgaarb: loaded
Feb 13 19:31:12.097927 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 13 19:31:12.097945 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 13 19:31:12.098046 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:31:12.098064 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:31:12.098081 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:31:12.098101 kernel: pnp: PnP ACPI init
Feb 13 19:31:12.098117 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 19:31:12.098133 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:31:12.098149 kernel: NET: Registered PF_INET protocol family
Feb 13 19:31:12.098165 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:31:12.098181 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 19:31:12.098197 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:31:12.098213 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 19:31:12.098230 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 19:31:12.098249 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 19:31:12.098265 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 19:31:12.098281 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 19:31:12.098298 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:31:12.098314 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:31:12.098526 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:31:12.098768 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:31:12.099206 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:31:12.099344 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 19:31:12.099488 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 19:31:12.099508 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:31:12.099523 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 19:31:12.099538 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Feb 13 19:31:12.099612 kernel: clocksource: Switched to clocksource tsc
Feb 13 19:31:12.099625 kernel: Initialise system trusted keyrings
Feb 13 19:31:12.099639 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 19:31:12.099658 kernel: Key type asymmetric registered
Feb 13 19:31:12.099671 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:31:12.099684 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:31:12.099698 kernel: io scheduler mq-deadline registered
Feb 13 19:31:12.099711 kernel: io scheduler kyber registered
Feb 13 19:31:12.099725 kernel: io scheduler bfq registered
Feb 13 19:31:12.099739 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:31:12.099753 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:31:12.099842 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:31:12.099863 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:31:12.099877 kernel: i8042: Warning: Keylock active
Feb 13 19:31:12.100008 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:31:12.100026 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:31:12.100184 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 19:31:12.100308 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 19:31:12.100429 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T19:31:11 UTC (1739475071)
Feb 13 19:31:12.100582 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 19:31:12.100608 kernel: intel_pstate: CPU model not supported
Feb 13 19:31:12.100623 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:31:12.100638 kernel: Segment Routing with IPv6
Feb 13 19:31:12.100655 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:31:12.100671 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:31:12.100687 kernel: Key type dns_resolver registered
Feb 13 19:31:12.100702 kernel: IPI shorthand broadcast: enabled
Feb 13 19:31:12.100718 kernel: sched_clock: Marking stable (578002323, 280134277)->(962072131, -103935531)
Feb 13 19:31:12.100734 kernel: registered taskstats version 1
Feb 13 19:31:12.100754 kernel: Loading compiled-in X.509 certificates
Feb 13 19:31:12.100769 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6c364ddae48101e091a28279a8d953535f596d53'
Feb 13 19:31:12.100784 kernel: Key type .fscrypt registered
Feb 13 19:31:12.100800 kernel: Key type fscrypt-provisioning registered
Feb 13 19:31:12.100816 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:31:12.100831 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:31:12.100847 kernel: ima: No architecture policies found
Feb 13 19:31:12.100863 kernel: clk: Disabling unused clocks
Feb 13 19:31:12.100879 kernel: Freeing unused kernel image (initmem) memory: 43476K
Feb 13 19:31:12.100898 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 19:31:12.100914 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Feb 13 19:31:12.100930 kernel: Run /init as init process
Feb 13 19:31:12.100945 kernel: with arguments:
Feb 13 19:31:12.100961 kernel: /init
Feb 13 19:31:12.100976 kernel: with environment:
Feb 13 19:31:12.100991 kernel: HOME=/
Feb 13 19:31:12.101007 kernel: TERM=linux
Feb 13 19:31:12.101022 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:31:12.101047 systemd[1]: Successfully made /usr/ read-only.
Feb 13 19:31:12.101082 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:31:12.101104 systemd[1]: Detected virtualization amazon.
Feb 13 19:31:12.101120 systemd[1]: Detected architecture x86-64.
Feb 13 19:31:12.101137 systemd[1]: Running in initrd.
Feb 13 19:31:12.101157 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:31:12.101175 systemd[1]: Hostname set to .
Feb 13 19:31:12.101192 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:31:12.101209 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:31:12.101228 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:31:12.101245 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:31:12.101264 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:31:12.101281 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:31:12.101302 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:31:12.101321 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:31:12.101340 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:31:12.101358 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:31:12.101377 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:31:12.101395 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:31:12.101416 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:31:12.101430 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:31:12.101447 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:31:12.101465 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:31:12.101483 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:31:12.101500 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:31:12.101519 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:31:12.101537 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 19:31:12.101590 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:31:12.101611 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:31:12.101628 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:31:12.101646 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:31:12.101665 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:31:12.101682 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:31:12.101699 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:31:12.101716 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:31:12.101741 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:31:12.101766 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:31:12.101784 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:31:12.101802 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:31:12.101820 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:31:12.101838 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:31:12.101860 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:31:12.101909 systemd-journald[179]: Collecting audit messages is disabled.
Feb 13 19:31:12.101950 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:31:12.101974 systemd-journald[179]: Journal started
Feb 13 19:31:12.102025 systemd-journald[179]: Runtime Journal (/run/log/journal/ec25ffe01400307896c3a38a950ba54c) is 4.8M, max 38.5M, 33.7M free.
Feb 13 19:31:12.087833 systemd-modules-load[180]: Inserted module 'overlay'
Feb 13 19:31:12.254729 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:31:12.254767 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:31:12.254795 kernel: Bridge firewalling registered
Feb 13 19:31:12.140796 systemd-modules-load[180]: Inserted module 'br_netfilter'
Feb 13 19:31:12.255665 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:31:12.263430 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:31:12.273572 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:31:12.287906 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:31:12.304847 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:31:12.311770 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:31:12.326103 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:31:12.340947 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:31:12.345611 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:31:12.351731 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:31:12.353747 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:31:12.369804 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:31:12.382813 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:31:12.397898 dracut-cmdline[214]: dracut-dracut-053
Feb 13 19:31:12.403188 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:31:12.459181 systemd-resolved[215]: Positive Trust Anchors:
Feb 13 19:31:12.459197 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:31:12.459261 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:31:12.463651 systemd-resolved[215]: Defaulting to hostname 'linux'.
Feb 13 19:31:12.466501 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:31:12.503163 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:31:12.615580 kernel: SCSI subsystem initialized
Feb 13 19:31:12.630575 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:31:12.651576 kernel: iscsi: registered transport (tcp)
Feb 13 19:31:12.678614 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:31:12.678701 kernel: QLogic iSCSI HBA Driver
Feb 13 19:31:12.726983 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:31:12.739878 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:31:12.779586 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:31:12.779677 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:31:12.779698 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:31:12.827575 kernel: raid6: avx512x4 gen() 12307 MB/s
Feb 13 19:31:12.844579 kernel: raid6: avx512x2 gen() 15020 MB/s
Feb 13 19:31:12.863453 kernel: raid6: avx512x1 gen() 13182 MB/s
Feb 13 19:31:12.879604 kernel: raid6: avx2x4 gen() 10158 MB/s
Feb 13 19:31:12.896582 kernel: raid6: avx2x2 gen() 15177 MB/s
Feb 13 19:31:12.913576 kernel: raid6: avx2x1 gen() 8307 MB/s
Feb 13 19:31:12.913686 kernel: raid6: using algorithm avx2x2 gen() 15177 MB/s
Feb 13 19:31:12.930959 kernel: raid6: .... xor() 16387 MB/s, rmw enabled
Feb 13 19:31:12.931054 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 19:31:12.954572 kernel: xor: automatically using best checksumming function avx
Feb 13 19:31:13.127571 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:31:13.139089 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:31:13.146737 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:31:13.167862 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Feb 13 19:31:13.174225 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:31:13.189976 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:31:13.221484 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Feb 13 19:31:13.291958 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:31:13.301931 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:31:13.389079 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:31:13.405310 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:31:13.430084 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:31:13.435513 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:31:13.438638 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:31:13.442460 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:31:13.450831 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:31:13.491140 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:31:13.508136 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:31:13.508342 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 13 19:31:13.508522 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:34:d3:96:08:41
Feb 13 19:31:13.494584 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:31:13.523227 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:31:13.523515 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 19:31:13.527901 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:31:13.540879 (udev-worker)[458]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:31:13.545562 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:31:13.546810 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:31:13.547206 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:31:13.575136 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:31:13.575180 kernel: GPT:9289727 != 16777215
Feb 13 19:31:13.575201 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:31:13.575222 kernel: GPT:9289727 != 16777215
Feb 13 19:31:13.575242 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:31:13.575271 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:31:13.551788 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:31:13.554783 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:31:13.555076 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:31:13.559118 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:31:13.581931 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:31:13.590431 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:31:13.607581 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:31:13.607649 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:31:13.715577 kernel: BTRFS: device fsid 60f89c25-9096-4268-99ca-ef7992742f2b devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (456)
Feb 13 19:31:13.731598 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (453)
Feb 13 19:31:13.828933 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:31:13.846798 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:31:13.919831 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:31:13.924731 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:31:13.949768 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:31:13.950188 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:31:13.985236 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:31:14.011628 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:31:14.018843 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:31:14.029749 disk-uuid[632]: Primary Header is updated.
Feb 13 19:31:14.029749 disk-uuid[632]: Secondary Entries is updated.
Feb 13 19:31:14.029749 disk-uuid[632]: Secondary Header is updated.
Feb 13 19:31:14.034566 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:31:15.048234 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:31:15.048819 disk-uuid[633]: The operation has completed successfully.
Feb 13 19:31:15.241612 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:31:15.241751 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:31:15.319792 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:31:15.324263 sh[893]: Success
Feb 13 19:31:15.348190 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 19:31:15.482781 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:31:15.496791 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:31:15.498054 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:31:15.529400 kernel: BTRFS info (device dm-0): first mount of filesystem 60f89c25-9096-4268-99ca-ef7992742f2b
Feb 13 19:31:15.529479 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:31:15.529500 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:31:15.529519 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:31:15.530565 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:31:15.559579 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:31:15.562855 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:31:15.563887 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:31:15.576582 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:31:15.610907 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:31:15.643806 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:31:15.643886 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:31:15.647974 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:31:15.658579 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:31:15.686057 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:31:15.689578 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:31:15.696741 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:31:15.706885 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:31:15.778495 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:31:15.789848 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:31:15.884513 systemd-networkd[1087]: lo: Link UP
Feb 13 19:31:15.884525 systemd-networkd[1087]: lo: Gained carrier
Feb 13 19:31:15.887059 systemd-networkd[1087]: Enumeration completed
Feb 13 19:31:15.888862 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:31:15.890165 systemd-networkd[1087]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:31:15.890170 systemd-networkd[1087]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:31:15.895718 systemd[1]: Reached target network.target - Network.
Feb 13 19:31:15.903332 systemd-networkd[1087]: eth0: Link UP
Feb 13 19:31:15.903342 systemd-networkd[1087]: eth0: Gained carrier
Feb 13 19:31:15.903361 systemd-networkd[1087]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:31:15.920809 systemd-networkd[1087]: eth0: DHCPv4 address 172.31.18.16/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:31:15.989995 ignition[1015]: Ignition 2.20.0
Feb 13 19:31:15.990525 ignition[1015]: Stage: fetch-offline
Feb 13 19:31:15.991155 ignition[1015]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:31:15.991169 ignition[1015]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:31:15.991636 ignition[1015]: Ignition finished successfully
Feb 13 19:31:16.000374 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:31:16.015810 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:31:16.077611 ignition[1098]: Ignition 2.20.0
Feb 13 19:31:16.077626 ignition[1098]: Stage: fetch
Feb 13 19:31:16.078356 ignition[1098]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:31:16.078371 ignition[1098]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:31:16.078505 ignition[1098]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:31:16.118095 ignition[1098]: PUT result: OK
Feb 13 19:31:16.121513 ignition[1098]: parsed url from cmdline: ""
Feb 13 19:31:16.121528 ignition[1098]: no config URL provided
Feb 13 19:31:16.121539 ignition[1098]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:31:16.121581 ignition[1098]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:31:16.121608 ignition[1098]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:31:16.124122 ignition[1098]: PUT result: OK
Feb 13 19:31:16.124187 ignition[1098]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:31:16.138609 unknown[1098]: fetched base config from "system"
Feb 13 19:31:16.127532 ignition[1098]: GET result: OK
Feb 13 19:31:16.138623 unknown[1098]: fetched base config from "system"
Feb 13 19:31:16.127627 ignition[1098]: parsing config with SHA512: 31f9746415b0a8f037ef2179686b4d97a00c0b64bf16d9569279a2247ea0dd6307f98734def7e996ccf5689e01efe7eb80383ddf9d4b1a0818cdf808493561f6
Feb 13 19:31:16.138632 unknown[1098]: fetched user config from "aws"
Feb 13 19:31:16.139045 ignition[1098]: fetch: fetch complete
Feb 13 19:31:16.144153 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:31:16.139052 ignition[1098]: fetch: fetch passed
Feb 13 19:31:16.152855 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:31:16.140343 ignition[1098]: Ignition finished successfully
Feb 13 19:31:16.229424 ignition[1104]: Ignition 2.20.0
Feb 13 19:31:16.229457 ignition[1104]: Stage: kargs
Feb 13 19:31:16.230215 ignition[1104]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:31:16.230231 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:31:16.230356 ignition[1104]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:31:16.235131 ignition[1104]: PUT result: OK
Feb 13 19:31:16.246148 ignition[1104]: kargs: kargs passed
Feb 13 19:31:16.246241 ignition[1104]: Ignition finished successfully
Feb 13 19:31:16.248981 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:31:16.261848 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:31:16.281329 ignition[1110]: Ignition 2.20.0
Feb 13 19:31:16.281343 ignition[1110]: Stage: disks
Feb 13 19:31:16.281738 ignition[1110]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:31:16.281749 ignition[1110]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:31:16.281951 ignition[1110]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:31:16.283308 ignition[1110]: PUT result: OK
Feb 13 19:31:16.292789 ignition[1110]: disks: disks passed
Feb 13 19:31:16.292890 ignition[1110]: Ignition finished successfully
Feb 13 19:31:16.295735 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:31:16.296496 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:31:16.302268 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:31:16.305076 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:31:16.305189 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:31:16.309845 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:31:16.315907 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:31:16.362296 systemd-fsck[1119]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:31:16.366042 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:31:16.530745 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:31:16.709923 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 157595f2-1515-4117-a2d1-73fe2ed647fc r/w with ordered data mode. Quota mode: none.
Feb 13 19:31:16.710949 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:31:16.716257 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:31:16.724732 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:31:16.735961 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:31:16.736747 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:31:16.736820 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:31:16.736857 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:31:16.755859 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:31:16.763972 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:31:16.778570 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1138)
Feb 13 19:31:16.781567 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:31:16.781640 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:31:16.782877 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:31:16.793573 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:31:16.799692 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:31:16.868715 initrd-setup-root[1162]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:31:16.877331 initrd-setup-root[1169]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:31:16.884496 initrd-setup-root[1176]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:31:16.891054 initrd-setup-root[1183]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:31:17.130993 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:31:17.141730 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:31:17.151066 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:31:17.171628 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:31:17.227661 ignition[1250]: INFO : Ignition 2.20.0
Feb 13 19:31:17.227661 ignition[1250]: INFO : Stage: mount
Feb 13 19:31:17.228006 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:31:17.234830 ignition[1250]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:31:17.234830 ignition[1250]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:31:17.234830 ignition[1250]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:31:17.234830 ignition[1250]: INFO : PUT result: OK
Feb 13 19:31:17.249429 ignition[1250]: INFO : mount: mount passed
Feb 13 19:31:17.249429 ignition[1250]: INFO : Ignition finished successfully
Feb 13 19:31:17.253258 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:31:17.259727 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:31:17.527158 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:31:17.544155 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:31:17.587645 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1263)
Feb 13 19:31:17.592753 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:31:17.592902 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:31:17.592925 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:31:17.599849 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:31:17.602233 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:31:17.631958 ignition[1280]: INFO : Ignition 2.20.0
Feb 13 19:31:17.631958 ignition[1280]: INFO : Stage: files
Feb 13 19:31:17.634474 ignition[1280]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:31:17.634474 ignition[1280]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:31:17.634474 ignition[1280]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:31:17.634474 ignition[1280]: INFO : PUT result: OK
Feb 13 19:31:17.642503 ignition[1280]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:31:17.654719 ignition[1280]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:31:17.654719 ignition[1280]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:31:17.669913 ignition[1280]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:31:17.672084 ignition[1280]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:31:17.675514 unknown[1280]: wrote ssh authorized keys file for user: core
Feb 13 19:31:17.677249 ignition[1280]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:31:17.680182 ignition[1280]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:31:17.684358 ignition[1280]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:31:17.687673 ignition[1280]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:31:17.687673 ignition[1280]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:31:17.687673 ignition[1280]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:31:17.687673 ignition[1280]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:31:17.687673 ignition[1280]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:31:17.687673 ignition[1280]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Feb 13 19:31:17.951806 systemd-networkd[1087]: eth0: Gained IPv6LL
Feb 13 19:31:17.964842 ignition[1280]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 19:31:18.482480 ignition[1280]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:31:18.485212 ignition[1280]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:31:18.487339 ignition[1280]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:31:18.487339 ignition[1280]: INFO : files: files passed
Feb 13 19:31:18.490627 ignition[1280]: INFO : Ignition finished successfully
Feb 13 19:31:18.492020 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:31:18.508853 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:31:18.514221 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:31:18.518399 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:31:18.518634 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:31:18.561512 initrd-setup-root-after-ignition[1308]: grep:
Feb 13 19:31:18.563211 initrd-setup-root-after-ignition[1312]: grep:
Feb 13 19:31:18.564536 initrd-setup-root-after-ignition[1308]: /sysroot/etc/flatcar/enabled-sysext.conf
Feb 13 19:31:18.563461 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:31:18.586447 initrd-setup-root-after-ignition[1312]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:31:18.575043 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:31:18.595250 initrd-setup-root-after-ignition[1308]: : No such file or directory
Feb 13 19:31:18.599177 initrd-setup-root-after-ignition[1308]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:31:18.604067 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:31:18.651917 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:31:18.652058 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:31:18.657822 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:31:18.662240 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:31:18.667879 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:31:18.673783 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:31:18.697009 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:31:18.707800 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:31:18.726088 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:31:18.730152 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:31:18.731768 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:31:18.733006 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:31:18.733249 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:31:18.741642 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:31:18.743323 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:31:18.746557 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:31:18.748165 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:31:18.751907 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:31:18.756396 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:31:18.756562 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:31:18.762748 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:31:18.762910 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:31:18.768976 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:31:18.771097 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:31:18.771260 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:31:18.775050 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:31:18.777809 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:31:18.780959 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:31:18.782428 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:31:18.786258 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:31:18.787524 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:31:18.790843 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:31:18.792399 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:31:18.796213 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:31:18.796440 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:31:18.824016 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:31:18.830882 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:31:18.834113 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:31:18.834630 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:31:18.842510 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:31:18.842897 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:31:18.855569 ignition[1332]: INFO : Ignition 2.20.0
Feb 13 19:31:18.855569 ignition[1332]: INFO : Stage: umount
Feb 13 19:31:18.858483 ignition[1332]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:31:18.858483 ignition[1332]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:31:18.858483 ignition[1332]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:31:18.856631 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:31:18.870019 ignition[1332]: INFO : PUT result: OK
Feb 13 19:31:18.856733 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:31:18.874489 ignition[1332]: INFO : umount: umount passed
Feb 13 19:31:18.874489 ignition[1332]: INFO : Ignition finished successfully
Feb 13 19:31:18.874220 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:31:18.874372 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:31:18.876456 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:31:18.878757 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:31:18.879498 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:31:18.879595 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:31:18.880115 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:31:18.880173 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:31:18.880283 systemd[1]: Stopped target network.target - Network.
Feb 13 19:31:18.880425 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:31:18.880471 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:31:18.882860 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:31:18.882969 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:31:18.899355 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:31:18.904252 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:31:18.907015 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:31:18.910608 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:31:18.910669 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:31:18.921181 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:31:18.921271 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:31:18.936874 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:31:18.936964 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:31:18.940232 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:31:18.940307 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:31:18.960657 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:31:18.962522 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:31:18.966478 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:31:18.972248 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:31:18.972450 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:31:18.979669 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 19:31:18.980206 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:31:18.980347 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:31:18.990532 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 19:31:18.993698 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:31:18.993779 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:31:19.003037 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:31:19.004307 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:31:19.004533 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:31:19.006877 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:31:19.006951 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:31:19.013467 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:31:19.016025 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:31:19.024603 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:31:19.024771 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:31:19.030445 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:31:19.052303 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 19:31:19.052432 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:31:19.081021 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:31:19.081189 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:31:19.085276 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:31:19.086653 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:31:19.092307 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:31:19.092557 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:31:19.096541 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:31:19.097289 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:31:19.103936 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:31:19.104172 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:31:19.117027 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:31:19.123828 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:31:19.127946 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:31:19.128027 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:31:19.149862 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:31:19.150020 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:31:19.150106 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:31:19.157367 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:31:19.157463 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:31:19.159128 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:31:19.159213 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:31:19.160714 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:31:19.160785 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:31:19.171358 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 19:31:19.171473 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:31:19.175153 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:31:19.176910 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:31:19.180770 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:31:19.180894 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:31:19.184425 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:31:19.187261 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:31:19.187386 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:31:19.201976 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:31:19.224893 systemd[1]: Switching root.
Feb 13 19:31:19.258277 systemd-journald[179]: Journal stopped
Feb 13 19:31:20.880039 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:31:20.880132 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:31:20.880154 kernel: SELinux: policy capability open_perms=1
Feb 13 19:31:20.880171 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:31:20.880189 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:31:20.880209 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:31:20.880232 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:31:20.880250 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:31:20.880275 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:31:20.880294 kernel: audit: type=1403 audit(1739475079.533:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:31:20.880318 systemd[1]: Successfully loaded SELinux policy in 44.519ms.
Feb 13 19:31:20.880345 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.610ms.
Feb 13 19:31:20.880367 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:31:20.880388 systemd[1]: Detected virtualization amazon.
Feb 13 19:31:20.880409 systemd[1]: Detected architecture x86-64.
Feb 13 19:31:20.880428 systemd[1]: Detected first boot.
Feb 13 19:31:20.880447 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:31:20.880470 zram_generator::config[1377]: No configuration found.
Feb 13 19:31:20.880488 kernel: Guest personality initialized and is inactive
Feb 13 19:31:20.880509 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Feb 13 19:31:20.880527 kernel: Initialized host personality
Feb 13 19:31:20.882289 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 19:31:20.882330 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:31:20.882357 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 19:31:20.882379 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:31:20.882407 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:31:20.882429 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:31:20.882450 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:31:20.882471 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:31:20.882491 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:31:20.882513 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:31:20.882535 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:31:20.882573 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:31:20.882604 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:31:20.882628 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:31:20.882647 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:31:20.882668 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:31:20.882686 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:31:20.882705 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:31:20.882726 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:31:20.882745 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:31:20.882768 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:31:20.882787 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:31:20.882805 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:31:20.882823 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:31:20.882841 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:31:20.882865 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:31:20.882883 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:31:20.882901 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:31:20.882919 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:31:20.882937 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:31:20.882958 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:31:20.882978 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:31:20.883450 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 19:31:20.883482 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:31:20.883502 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:31:20.883528 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:31:20.883564 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:31:20.883583 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:31:20.883603 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:31:20.883628 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:31:20.883647 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:31:20.883666 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:31:20.883685 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:31:20.883704 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:31:20.883724 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:31:20.883744 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:31:20.883764 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:31:20.883788 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:31:20.883808 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:31:20.883827 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:31:20.883846 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:31:20.883864 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:31:20.883883 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:31:20.883901 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:31:20.883920 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:31:20.883943 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:31:20.883962 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:31:20.883981 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:31:20.884000 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:31:20.884018 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:31:20.884038 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:31:20.884058 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:31:20.884077 kernel: fuse: init (API version 7.39)
Feb 13 19:31:20.884097 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:31:20.884136 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:31:20.884159 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:31:20.884182 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 19:31:20.884205 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:31:20.884227 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:31:20.884249 systemd[1]: Stopped verity-setup.service.
Feb 13 19:31:20.884270 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:31:20.884295 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:31:20.884318 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:31:20.884341 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:31:20.884367 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:31:20.884390 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:31:20.884410 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:31:20.884433 kernel: ACPI: bus type drm_connector registered
Feb 13 19:31:20.884456 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:31:20.884478 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:31:20.884500 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:31:20.884523 kernel: loop: module loaded
Feb 13 19:31:20.887448 systemd-journald[1460]: Collecting audit messages is disabled.
Feb 13 19:31:20.887510 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:31:20.887533 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:31:20.887584 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:31:20.887608 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:31:20.887630 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:31:20.887651 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:31:20.887673 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:31:20.887699 systemd-journald[1460]: Journal started
Feb 13 19:31:20.887738 systemd-journald[1460]: Runtime Journal (/run/log/journal/ec25ffe01400307896c3a38a950ba54c) is 4.8M, max 38.5M, 33.7M free.
Feb 13 19:31:20.433119 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:31:20.441852 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 19:31:20.442329 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:31:20.889616 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:31:20.889655 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:31:20.898684 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:31:20.905536 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:31:20.906037 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:31:20.908714 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:31:20.910868 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:31:20.913265 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:31:20.916117 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Feb 13 19:31:20.939298 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:31:20.950672 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:31:20.964658 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:31:20.966715 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:31:20.966792 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:31:20.971793 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 19:31:20.987098 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:31:20.993997 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:31:20.995515 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:31:20.999373 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:31:21.005807 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:31:21.007399 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:31:21.017861 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:31:21.020714 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:31:21.027874 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:31:21.044310 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:31:21.056929 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:31:21.064292 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:31:21.070676 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:31:21.075229 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:31:21.119317 systemd-journald[1460]: Time spent on flushing to /var/log/journal/ec25ffe01400307896c3a38a950ba54c is 153.553ms for 962 entries.
Feb 13 19:31:21.119317 systemd-journald[1460]: System Journal (/var/log/journal/ec25ffe01400307896c3a38a950ba54c) is 8M, max 195.6M, 187.6M free.
Feb 13 19:31:21.305345 systemd-journald[1460]: Received client request to flush runtime journal.
Feb 13 19:31:21.305411 kernel: loop0: detected capacity change from 0 to 62832
Feb 13 19:31:21.305439 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:31:21.305463 kernel: loop1: detected capacity change from 0 to 138176
Feb 13 19:31:21.159376 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:31:21.162115 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:31:21.170876 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 19:31:21.189941 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:31:21.207373 systemd-tmpfiles[1510]: ACLs are not supported, ignoring.
Feb 13 19:31:21.207435 systemd-tmpfiles[1510]: ACLs are not supported, ignoring.
Feb 13 19:31:21.213240 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:31:21.217359 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:31:21.245368 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:31:21.258807 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:31:21.267943 udevadm[1520]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 19:31:21.309883 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:31:21.333501 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 19:31:21.352379 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:31:21.362817 kernel: loop2: detected capacity change from 0 to 218376
Feb 13 19:31:21.364754 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:31:21.402361 systemd-tmpfiles[1533]: ACLs are not supported, ignoring.
Feb 13 19:31:21.402599 systemd-tmpfiles[1533]: ACLs are not supported, ignoring.
Feb 13 19:31:21.409971 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:31:21.446655 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:31:21.512579 kernel: loop3: detected capacity change from 0 to 147912
Feb 13 19:31:21.636170 kernel: loop4: detected capacity change from 0 to 62832
Feb 13 19:31:21.698592 kernel: loop5: detected capacity change from 0 to 138176
Feb 13 19:31:21.756784 kernel: loop6: detected capacity change from 0 to 218376
Feb 13 19:31:21.808576 kernel: loop7: detected capacity change from 0 to 147912
Feb 13 19:31:21.871798 (sd-merge)[1539]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 19:31:21.882187 (sd-merge)[1539]: Merged extensions into '/usr'.
Feb 13 19:31:21.896128 systemd[1]: Reload requested from client PID 1509 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:31:21.896154 systemd[1]: Reloading...
Feb 13 19:31:22.053671 zram_generator::config[1564]: No configuration found.
Feb 13 19:31:22.314594 ldconfig[1504]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:31:22.406280 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:31:22.532684 systemd[1]: Reloading finished in 634 ms.
Feb 13 19:31:22.554231 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:31:22.556688 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:31:22.573837 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:31:22.586283 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:31:22.614669 systemd[1]: Reload requested from client PID 1617 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:31:22.614706 systemd[1]: Reloading...
Feb 13 19:31:22.645162 systemd-tmpfiles[1618]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:31:22.645591 systemd-tmpfiles[1618]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:31:22.648150 systemd-tmpfiles[1618]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:31:22.650833 systemd-tmpfiles[1618]: ACLs are not supported, ignoring.
Feb 13 19:31:22.650917 systemd-tmpfiles[1618]: ACLs are not supported, ignoring.
Feb 13 19:31:22.658905 systemd-tmpfiles[1618]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:31:22.659102 systemd-tmpfiles[1618]: Skipping /boot
Feb 13 19:31:22.688833 systemd-tmpfiles[1618]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:31:22.688999 systemd-tmpfiles[1618]: Skipping /boot
Feb 13 19:31:22.783570 zram_generator::config[1652]: No configuration found.
Feb 13 19:31:22.910252 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:31:22.992720 systemd[1]: Reloading finished in 377 ms.
Feb 13 19:31:23.010249 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:31:23.026698 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:31:23.046284 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:31:23.052930 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:31:23.065815 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:31:23.090634 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:31:23.107956 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:31:23.116969 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:31:23.124465 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:31:23.125405 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:31:23.140024 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:31:23.154418 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:31:23.158023 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:31:23.159431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:31:23.159655 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:31:23.159810 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:31:23.171041 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:31:23.181019 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:31:23.181322 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:31:23.181594 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:31:23.181744 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:31:23.181898 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:31:23.191529 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:31:23.191904 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:31:23.202687 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:31:23.204514 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:31:23.204732 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:31:23.205004 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:31:23.208980 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:31:23.219713 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:31:23.220071 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Feb 13 19:31:23.222440 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:31:23.222732 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:31:23.237350 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:31:23.244206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:31:23.246699 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:31:23.256846 systemd[1]: Finished ensure-sysext.service. Feb 13 19:31:23.264520 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:31:23.267273 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:31:23.267938 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:31:23.274010 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:31:23.274296 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:31:23.282741 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:31:23.306099 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:31:23.308446 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:31:23.316851 augenrules[1736]: No rules Feb 13 19:31:23.319921 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:31:23.320196 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:31:23.322446 systemd-udevd[1707]: Using default interface naming scheme 'v255'. 
Feb 13 19:31:23.322720 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:31:23.327844 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:31:23.394572 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:31:23.406012 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:31:23.473837 systemd-resolved[1701]: Positive Trust Anchors: Feb 13 19:31:23.473857 systemd-resolved[1701]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:31:23.473911 systemd-resolved[1701]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:31:23.488275 systemd-resolved[1701]: Defaulting to hostname 'linux'. Feb 13 19:31:23.494514 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:31:23.496122 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:31:23.552909 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:31:23.578978 systemd-networkd[1748]: lo: Link UP Feb 13 19:31:23.579387 systemd-networkd[1748]: lo: Gained carrier Feb 13 19:31:23.580556 systemd-networkd[1748]: Enumeration completed Feb 13 19:31:23.580788 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:31:23.582621 systemd[1]: Reached target network.target - Network. 
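The positive and negative trust-anchor lists above are systemd-resolved's built-in defaults. They can be extended with files under /etc/dnssec-trust-anchors.d/; a sketch of adding a negative anchor for an internal zone (hypothetical file name and domain):

```ini
# /etc/dnssec-trust-anchors.d/internal.negative -- illustrative; one domain per line
corp.example
```

Domains listed in *.negative files are treated as unsigned, like the built-in negative anchors shown in the log.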
Feb 13 19:31:23.590493 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 19:31:23.599784 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:31:23.615703 (udev-worker)[1751]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:31:23.644364 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 19:31:23.692567 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1753) Feb 13 19:31:23.695256 systemd-networkd[1748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:31:23.695276 systemd-networkd[1748]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:31:23.701680 systemd-networkd[1748]: eth0: Link UP Feb 13 19:31:23.701869 systemd-networkd[1748]: eth0: Gained carrier Feb 13 19:31:23.701909 systemd-networkd[1748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
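The "potentially unpredictable interface name" message above means eth0 was matched only by the catch-all zz-default.network. A dedicated .network file matching on a stable property such as the MAC address avoids that; a sketch (hypothetical file name and placeholder MAC):

```ini
# /etc/systemd/network/10-eth0.network -- illustrative; MAC address is a placeholder
[Match]
MACAddress=00:11:22:33:44:55

[Network]
DHCP=ipv4
```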
Feb 13 19:31:23.711679 systemd-networkd[1748]: eth0: DHCPv4 address 172.31.18.16/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:31:23.737570 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 19:31:23.747641 kernel: ACPI: button: Power Button [PWRF] Feb 13 19:31:23.750114 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Feb 13 19:31:23.758570 kernel: ACPI: button: Sleep Button [SLPF] Feb 13 19:31:23.786585 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Feb 13 19:31:23.789574 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Feb 13 19:31:23.896569 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:31:23.908234 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:31:23.972402 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:31:23.976991 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:31:23.981331 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:31:23.991914 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:31:24.000985 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:31:24.032638 lvm[1867]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:31:24.061855 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:31:24.208711 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:31:24.215932 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Feb 13 19:31:24.218759 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:31:24.221990 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:31:24.229379 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:31:24.233608 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:31:24.238609 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:31:24.240503 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:31:24.243257 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:31:24.244877 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:31:24.245056 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:31:24.246323 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:31:24.249501 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:31:24.253195 lvm[1873]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:31:24.254320 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:31:24.263356 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 19:31:24.265292 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 19:31:24.266691 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 19:31:24.276413 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:31:24.279084 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Feb 13 19:31:24.283817 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:31:24.295781 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:31:24.299199 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:31:24.305100 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:31:24.305210 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:31:24.323273 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:31:24.336856 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:31:24.340674 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:31:24.362974 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:31:24.380360 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:31:24.386719 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:31:24.393896 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:31:24.406809 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 19:31:24.415002 jq[1880]: false Feb 13 19:31:24.417712 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:31:24.427791 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:31:24.436356 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:31:24.457826 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:31:24.460527 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Feb 13 19:31:24.461316 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:31:24.469774 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:31:24.476580 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:31:24.480853 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:31:24.496167 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:31:24.496979 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:31:24.527270 dbus-daemon[1879]: [system] SELinux support is enabled Feb 13 19:31:24.557328 dbus-daemon[1879]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1748 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:31:24.561343 extend-filesystems[1881]: Found loop4 Feb 13 19:31:24.592431 extend-filesystems[1881]: Found loop5 Feb 13 19:31:24.592431 extend-filesystems[1881]: Found loop6 Feb 13 19:31:24.592431 extend-filesystems[1881]: Found loop7 Feb 13 19:31:24.592431 extend-filesystems[1881]: Found nvme0n1 Feb 13 19:31:24.592431 extend-filesystems[1881]: Found nvme0n1p1 Feb 13 19:31:24.592431 extend-filesystems[1881]: Found nvme0n1p2 Feb 13 19:31:24.592431 extend-filesystems[1881]: Found nvme0n1p3 Feb 13 19:31:24.592431 extend-filesystems[1881]: Found usr Feb 13 19:31:24.592431 extend-filesystems[1881]: Found nvme0n1p4 Feb 13 19:31:24.592431 extend-filesystems[1881]: Found nvme0n1p6 Feb 13 19:31:24.592431 extend-filesystems[1881]: Found nvme0n1p7 Feb 13 19:31:24.592431 extend-filesystems[1881]: Found nvme0n1p9 Feb 13 19:31:24.592431 extend-filesystems[1881]: Checking size of /dev/nvme0n1p9
Feb 13 19:31:24.574438 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:31:24.592788 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:31:24.593055 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:31:24.619094 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:31:24.619146 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:31:24.620807 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:31:24.620839 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:31:24.626486 dbus-daemon[1879]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:31:24.630611 ntpd[1885]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:04:11 UTC 2025 (1): Starting Feb 13 19:31:24.632305 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:04:11 UTC 2025 (1): Starting Feb 13 19:31:24.632305 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:31:24.632305 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: ---------------------------------------------------- Feb 13 19:31:24.632305 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:31:24.632305 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:31:24.632305 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: corporation. Support and training for ntp-4 are Feb 13 19:31:24.632305 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: available at https://www.nwtime.org/support Feb 13 19:31:24.632305 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: ---------------------------------------------------- Feb 13 19:31:24.630645 ntpd[1885]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:31:24.630657 ntpd[1885]: ---------------------------------------------------- Feb 13 19:31:24.630667 ntpd[1885]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:31:24.630677 ntpd[1885]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:31:24.630687 ntpd[1885]: corporation. Support and training for ntp-4 are Feb 13 19:31:24.630697 ntpd[1885]: available at https://www.nwtime.org/support Feb 13 19:31:24.630706 ntpd[1885]: ----------------------------------------------------
Feb 13 19:31:24.634334 coreos-metadata[1878]: Feb 13 19:31:24.633 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:31:24.636244 coreos-metadata[1878]: Feb 13 19:31:24.635 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:31:24.636244 coreos-metadata[1878]: Feb 13 19:31:24.635 INFO Fetch successful Feb 13 19:31:24.636244 coreos-metadata[1878]: Feb 13 19:31:24.636 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:31:24.636602 coreos-metadata[1878]: Feb 13 19:31:24.636 INFO Fetch successful Feb 13 19:31:24.636602 coreos-metadata[1878]: Feb 13 19:31:24.636 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:31:24.637226 ntpd[1885]: proto: precision = 0.064 usec (-24) Feb 13 19:31:24.637686 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: proto: precision = 0.064 usec (-24) Feb 13 19:31:24.638239 coreos-metadata[1878]: Feb 13 19:31:24.637 INFO Fetch successful Feb 13 19:31:24.638239 coreos-metadata[1878]: Feb 13 19:31:24.638 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 13 19:31:24.638673 ntpd[1885]: basedate set to 2025-02-01 Feb 13 19:31:24.659782 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: basedate set to 2025-02-01 Feb 13 19:31:24.659782 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: gps base set to 2025-02-02 (week 2352) Feb 13 19:31:24.659782 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:31:24.659782 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:31:24.659782 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:31:24.659782 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: Listen normally on 3 eth0 172.31.18.16:123 Feb 13 19:31:24.659782 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: Listen normally on 4 lo [::1]:123 Feb 13 19:31:24.659782 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: bind(21) AF_INET6 fe80::434:d3ff:fe96:841%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:31:24.659782 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: unable to create socket on eth0 (5) for fe80::434:d3ff:fe96:841%2#123 Feb 13 19:31:24.659782 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: failed to init interface for address fe80::434:d3ff:fe96:841%2 Feb 13 19:31:24.659782 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: Listening on routing socket on fd #21 for interface updates Feb 13 19:31:24.660193 coreos-metadata[1878]: Feb 13 19:31:24.649 INFO Fetch successful Feb 13 19:31:24.660193 coreos-metadata[1878]: Feb 13 19:31:24.649 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:31:24.650627 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 19:31:24.660356 jq[1892]: true Feb 13 19:31:24.638697 ntpd[1885]: gps base set to 2025-02-02 (week 2352) Feb 13 19:31:24.651202 ntpd[1885]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:31:24.651262 ntpd[1885]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:31:24.655735 ntpd[1885]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:31:24.655791 ntpd[1885]: Listen normally on 3 eth0 172.31.18.16:123 Feb 13 19:31:24.655846 ntpd[1885]: Listen normally on 4 lo [::1]:123 Feb 13 19:31:24.655905 ntpd[1885]: bind(21) AF_INET6 fe80::434:d3ff:fe96:841%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:31:24.655931 ntpd[1885]: unable to create socket on eth0 (5) for fe80::434:d3ff:fe96:841%2#123 Feb 13 19:31:24.655950 ntpd[1885]: failed to init interface for address fe80::434:d3ff:fe96:841%2 Feb 13 19:31:24.655990 ntpd[1885]: Listening on routing socket on fd #21 for interface updates Feb 13 19:31:24.672869 coreos-metadata[1878]: Feb 13 19:31:24.669 INFO Fetch failed with 404: resource not found Feb 13 19:31:24.672869 coreos-metadata[1878]: Feb 13 19:31:24.669 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:31:24.672869 coreos-metadata[1878]: Feb 13 19:31:24.670 INFO Fetch successful Feb 13 19:31:24.672869 coreos-metadata[1878]: Feb 13 19:31:24.670 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:31:24.672869 coreos-metadata[1878]: Feb 13 19:31:24.672 INFO Fetch successful Feb 13 19:31:24.672869 coreos-metadata[1878]: Feb 13 19:31:24.672 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 19:31:24.676131 ntpd[1885]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:31:24.686658 coreos-metadata[1878]: Feb 13 19:31:24.678 INFO Fetch successful Feb 13 19:31:24.686658 coreos-metadata[1878]: Feb 13 19:31:24.678 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 13 19:31:24.686814 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:31:24.686814 ntpd[1885]: 13 Feb 19:31:24 ntpd[1885]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:31:24.676171 ntpd[1885]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:31:24.689620 (ntainerd)[1908]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:31:24.697344 coreos-metadata[1878]: Feb 13 19:31:24.690 INFO Fetch successful Feb 13 19:31:24.697344 coreos-metadata[1878]: Feb 13 19:31:24.690 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:31:24.697430 extend-filesystems[1881]: Resized partition /dev/nvme0n1p9 Feb 13 19:31:24.691039 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 19:31:24.706009 extend-filesystems[1926]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:31:24.715697 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 19:31:24.695474 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:31:24.715975 coreos-metadata[1878]: Feb 13 19:31:24.706 INFO Fetch successful Feb 13 19:31:24.716025 jq[1917]: true Feb 13 19:31:24.696394 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:31:24.723765 update_engine[1891]: I20250213 19:31:24.717516 1891 main.cc:92] Flatcar Update Engine starting Feb 13 19:31:24.739784 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:31:24.744931 update_engine[1891]: I20250213 19:31:24.744853 1891 update_check_scheduler.cc:74] Next update check in 3m0s Feb 13 19:31:24.754128 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:31:24.818736 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 19:31:24.834097 extend-filesystems[1926]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 19:31:24.834097 extend-filesystems[1926]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:31:24.834097 extend-filesystems[1926]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 19:31:24.839000 extend-filesystems[1881]: Resized filesystem in /dev/nvme0n1p9 Feb 13 19:31:24.845603 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:31:24.846012 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:31:24.851163 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:31:24.873672 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:31:24.890096 bash[1956]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:31:24.898413 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:31:24.912485 systemd[1]: Starting sshkeys.service... Feb 13 19:31:24.952587 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1756) Feb 13 19:31:25.021288 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:31:25.036115 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:31:25.059645 systemd-networkd[1748]: eth0: Gained IPv6LL Feb 13 19:31:25.079510 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:31:25.082685 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:31:25.091130 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
Feb 13 19:31:25.106889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:31:25.119014 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:31:25.157918 systemd-logind[1890]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 19:31:25.162992 systemd-logind[1890]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 19:31:25.163025 systemd-logind[1890]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:31:25.174065 systemd-logind[1890]: New seat seat0. Feb 13 19:31:25.181664 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:31:25.283125 amazon-ssm-agent[1981]: Initializing new seelog logger Feb 13 19:31:25.283125 amazon-ssm-agent[1981]: New Seelog Logger Creation Complete Feb 13 19:31:25.283125 amazon-ssm-agent[1981]: 2025/02/13 19:31:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:31:25.283125 amazon-ssm-agent[1981]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:31:25.283645 amazon-ssm-agent[1981]: 2025/02/13 19:31:25 processing appconfig overrides Feb 13 19:31:25.295583 amazon-ssm-agent[1981]: 2025/02/13 19:31:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:31:25.295583 amazon-ssm-agent[1981]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:31:25.295583 amazon-ssm-agent[1981]: 2025/02/13 19:31:25 processing appconfig overrides Feb 13 19:31:25.295583 amazon-ssm-agent[1981]: 2025/02/13 19:31:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:31:25.295583 amazon-ssm-agent[1981]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 19:31:25.295583 amazon-ssm-agent[1981]: 2025/02/13 19:31:25 processing appconfig overrides Feb 13 19:31:25.309652 amazon-ssm-agent[1981]: 2025-02-13 19:31:25 INFO Proxy environment variables: Feb 13 19:31:25.317783 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:31:25.346979 amazon-ssm-agent[1981]: 2025/02/13 19:31:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:31:25.346979 amazon-ssm-agent[1981]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:31:25.347128 amazon-ssm-agent[1981]: 2025/02/13 19:31:25 processing appconfig overrides Feb 13 19:31:25.410934 amazon-ssm-agent[1981]: 2025-02-13 19:31:25 INFO https_proxy: Feb 13 19:31:25.414307 locksmithd[1932]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:31:25.444110 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:31:25.456340 dbus-daemon[1879]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:31:25.459064 dbus-daemon[1879]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1916 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:31:25.470986 systemd[1]: Starting polkit.service - Authorization Manager... 
Feb 13 19:31:25.506943 coreos-metadata[1963]: Feb 13 19:31:25.506 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:31:25.506943 coreos-metadata[1963]: Feb 13 19:31:25.506 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:31:25.506943 coreos-metadata[1963]: Feb 13 19:31:25.506 INFO Fetch successful Feb 13 19:31:25.506943 coreos-metadata[1963]: Feb 13 19:31:25.506 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:31:25.507483 coreos-metadata[1963]: Feb 13 19:31:25.507 INFO Fetch successful Feb 13 19:31:25.514647 amazon-ssm-agent[1981]: 2025-02-13 19:31:25 INFO http_proxy: Feb 13 19:31:25.509139 unknown[1963]: wrote ssh authorized keys file for user: core Feb 13 19:31:25.524006 polkitd[2078]: Started polkitd version 121 Feb 13 19:31:25.546629 update-ssh-keys[2085]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:31:25.549109 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:31:25.564071 systemd[1]: Finished sshkeys.service. Feb 13 19:31:25.572259 polkitd[2078]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:31:25.572358 polkitd[2078]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:31:25.585707 polkitd[2078]: Finished loading, compiling and executing 2 rules Feb 13 19:31:25.587297 dbus-daemon[1879]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:31:25.588606 polkitd[2078]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:31:25.588830 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:31:25.614788 amazon-ssm-agent[1981]: 2025-02-13 19:31:25 INFO no_proxy: Feb 13 19:31:25.649362 systemd-hostnamed[1916]: Hostname set to (transient) Feb 13 19:31:25.649885 systemd-resolved[1701]: System hostname changed to 'ip-172-31-18-16'. 
Feb 13 19:31:25.714879 amazon-ssm-agent[1981]: 2025-02-13 19:31:25 INFO Checking if agent identity type OnPrem can be assumed
Feb 13 19:31:25.765367 containerd[1908]: time="2025-02-13T19:31:25.765244125Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 19:31:25.818562 amazon-ssm-agent[1981]: 2025-02-13 19:31:25 INFO Checking if agent identity type EC2 can be assumed
Feb 13 19:31:25.876675 containerd[1908]: time="2025-02-13T19:31:25.875260665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:31:25.877682 containerd[1908]: time="2025-02-13T19:31:25.877468623Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:31:25.877682 containerd[1908]: time="2025-02-13T19:31:25.877529172Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:31:25.877682 containerd[1908]: time="2025-02-13T19:31:25.877577182Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:31:25.877865 containerd[1908]: time="2025-02-13T19:31:25.877792457Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:31:25.877865 containerd[1908]: time="2025-02-13T19:31:25.877820247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:31:25.877951 containerd[1908]: time="2025-02-13T19:31:25.877907741Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:31:25.877951 containerd[1908]: time="2025-02-13T19:31:25.877938130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:31:25.880575 containerd[1908]: time="2025-02-13T19:31:25.878263647Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:31:25.880575 containerd[1908]: time="2025-02-13T19:31:25.878301587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:31:25.880575 containerd[1908]: time="2025-02-13T19:31:25.878326129Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:31:25.880575 containerd[1908]: time="2025-02-13T19:31:25.878342007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:31:25.880575 containerd[1908]: time="2025-02-13T19:31:25.878469375Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:31:25.880575 containerd[1908]: time="2025-02-13T19:31:25.878760112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:31:25.880575 containerd[1908]: time="2025-02-13T19:31:25.879057503Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:31:25.880575 containerd[1908]: time="2025-02-13T19:31:25.879077893Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:31:25.880575 containerd[1908]: time="2025-02-13T19:31:25.879176812Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:31:25.880575 containerd[1908]: time="2025-02-13T19:31:25.879233331Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:31:25.887034 containerd[1908]: time="2025-02-13T19:31:25.886982313Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:31:25.890569 containerd[1908]: time="2025-02-13T19:31:25.887246064Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:31:25.890569 containerd[1908]: time="2025-02-13T19:31:25.888602552Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:31:25.890569 containerd[1908]: time="2025-02-13T19:31:25.888645878Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:31:25.890569 containerd[1908]: time="2025-02-13T19:31:25.888669513Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:31:25.890569 containerd[1908]: time="2025-02-13T19:31:25.888970479Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:31:25.890569 containerd[1908]: time="2025-02-13T19:31:25.890181699Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:31:25.890569 containerd[1908]: time="2025-02-13T19:31:25.890352589Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:31:25.890569 containerd[1908]: time="2025-02-13T19:31:25.890382137Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:31:25.890569 containerd[1908]: time="2025-02-13T19:31:25.890414286Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:31:25.890569 containerd[1908]: time="2025-02-13T19:31:25.890442395Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:31:25.890569 containerd[1908]: time="2025-02-13T19:31:25.890468078Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:31:25.890569 containerd[1908]: time="2025-02-13T19:31:25.890495366Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:31:25.890569 containerd[1908]: time="2025-02-13T19:31:25.890522153Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:31:25.899800 containerd[1908]: time="2025-02-13T19:31:25.899432917Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:31:25.899991 containerd[1908]: time="2025-02-13T19:31:25.899969241Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:31:25.901503 containerd[1908]: time="2025-02-13T19:31:25.901476398Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:31:25.901645 containerd[1908]: time="2025-02-13T19:31:25.901626514Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:31:25.903534 containerd[1908]: time="2025-02-13T19:31:25.902081421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:31:25.903534 containerd[1908]: time="2025-02-13T19:31:25.902114301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:31:25.903534 containerd[1908]: time="2025-02-13T19:31:25.902137346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:31:25.903534 containerd[1908]: time="2025-02-13T19:31:25.902159203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:31:25.903534 containerd[1908]: time="2025-02-13T19:31:25.902179372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:31:25.903534 containerd[1908]: time="2025-02-13T19:31:25.902210964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:31:25.903534 containerd[1908]: time="2025-02-13T19:31:25.902229385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:31:25.903534 containerd[1908]: time="2025-02-13T19:31:25.902249365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:31:25.903534 containerd[1908]: time="2025-02-13T19:31:25.902270251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:31:25.903534 containerd[1908]: time="2025-02-13T19:31:25.902294349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:31:25.903534 containerd[1908]: time="2025-02-13T19:31:25.902314659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:31:25.903534 containerd[1908]: time="2025-02-13T19:31:25.902332060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:31:25.903534 containerd[1908]: time="2025-02-13T19:31:25.902351341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 19:31:25.903534 containerd[1908]: time="2025-02-13T19:31:25.902373468Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:31:25.903534 containerd[1908]: time="2025-02-13T19:31:25.902408512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 19:31:25.904245 containerd[1908]: time="2025-02-13T19:31:25.902429528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 19:31:25.904245 containerd[1908]: time="2025-02-13T19:31:25.902446788Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 19:31:25.904558 containerd[1908]: time="2025-02-13T19:31:25.904349987Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 19:31:25.904558 containerd[1908]: time="2025-02-13T19:31:25.904471388Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 19:31:25.904558 containerd[1908]: time="2025-02-13T19:31:25.904492012Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 19:31:25.904558 containerd[1908]: time="2025-02-13T19:31:25.904511832Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 19:31:25.906258 containerd[1908]: time="2025-02-13T19:31:25.904734526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 19:31:25.906258 containerd[1908]: time="2025-02-13T19:31:25.904773589Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 19:31:25.906258 containerd[1908]: time="2025-02-13T19:31:25.904792285Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 19:31:25.906258 containerd[1908]: time="2025-02-13T19:31:25.904812695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 19:31:25.906437 sshd_keygen[1922]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:31:25.906709 containerd[1908]: time="2025-02-13T19:31:25.905233716Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 19:31:25.906709 containerd[1908]: time="2025-02-13T19:31:25.905303779Z" level=info msg="Connect containerd service"
Feb 13 19:31:25.906709 containerd[1908]: time="2025-02-13T19:31:25.905356363Z" level=info msg="using legacy CRI server"
Feb 13 19:31:25.906709 containerd[1908]: time="2025-02-13T19:31:25.905366388Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 19:31:25.907030 containerd[1908]: time="2025-02-13T19:31:25.907004438Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 19:31:25.908061 containerd[1908]: time="2025-02-13T19:31:25.908032195Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:31:25.908599 containerd[1908]: time="2025-02-13T19:31:25.908239111Z" level=info msg="Start subscribing containerd event"
Feb 13 19:31:25.908599 containerd[1908]: time="2025-02-13T19:31:25.908293874Z" level=info msg="Start recovering state"
Feb 13 19:31:25.908599 containerd[1908]: time="2025-02-13T19:31:25.908372506Z" level=info msg="Start event monitor"
Feb 13 19:31:25.908599 containerd[1908]: time="2025-02-13T19:31:25.908393967Z" level=info msg="Start snapshots syncer"
Feb 13 19:31:25.908599 containerd[1908]: time="2025-02-13T19:31:25.908406874Z" level=info msg="Start cni network conf syncer for default"
Feb 13 19:31:25.908599 containerd[1908]: time="2025-02-13T19:31:25.908418090Z" level=info msg="Start streaming server"
Feb 13 19:31:25.910683 containerd[1908]: time="2025-02-13T19:31:25.910594383Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 19:31:25.910869 containerd[1908]: time="2025-02-13T19:31:25.910666921Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 19:31:25.911249 containerd[1908]: time="2025-02-13T19:31:25.910990452Z" level=info msg="containerd successfully booted in 0.150731s"
Feb 13 19:31:25.911130 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 19:31:25.916569 amazon-ssm-agent[1981]: 2025-02-13 19:31:25 INFO Agent will take identity from EC2
Feb 13 19:31:25.972359 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:31:25.983690 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:31:26.001823 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:31:26.002140 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:31:26.012872 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:31:26.014955 amazon-ssm-agent[1981]: 2025-02-13 19:31:25 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:31:26.024629 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 19:31:26.032940 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 19:31:26.035952 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 19:31:26.037441 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 19:31:26.074533 amazon-ssm-agent[1981]: 2025-02-13 19:31:25 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:31:26.074533 amazon-ssm-agent[1981]: 2025-02-13 19:31:25 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:31:26.074533 amazon-ssm-agent[1981]: 2025-02-13 19:31:25 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Feb 13 19:31:26.074533 amazon-ssm-agent[1981]: 2025-02-13 19:31:25 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Feb 13 19:31:26.074533 amazon-ssm-agent[1981]: 2025-02-13 19:31:25 INFO [amazon-ssm-agent] Starting Core Agent
Feb 13 19:31:26.074533 amazon-ssm-agent[1981]: 2025-02-13 19:31:25 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Feb 13 19:31:26.074533 amazon-ssm-agent[1981]: 2025-02-13 19:31:25 INFO [Registrar] Starting registrar module
Feb 13 19:31:26.074533 amazon-ssm-agent[1981]: 2025-02-13 19:31:25 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Feb 13 19:31:26.074533 amazon-ssm-agent[1981]: 2025-02-13 19:31:26 INFO [EC2Identity] EC2 registration was successful.
Feb 13 19:31:26.074533 amazon-ssm-agent[1981]: 2025-02-13 19:31:26 INFO [CredentialRefresher] credentialRefresher has started
Feb 13 19:31:26.074533 amazon-ssm-agent[1981]: 2025-02-13 19:31:26 INFO [CredentialRefresher] Starting credentials refresher loop
Feb 13 19:31:26.074533 amazon-ssm-agent[1981]: 2025-02-13 19:31:26 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Feb 13 19:31:26.120196 amazon-ssm-agent[1981]: 2025-02-13 19:31:26 INFO [CredentialRefresher] Next credential rotation will be in 31.741661394666668 minutes
Feb 13 19:31:27.087001 amazon-ssm-agent[1981]: 2025-02-13 19:31:27 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Feb 13 19:31:27.187988 amazon-ssm-agent[1981]: 2025-02-13 19:31:27 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2118) started
Feb 13 19:31:27.289519 amazon-ssm-agent[1981]: 2025-02-13 19:31:27 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Feb 13 19:31:27.631219 ntpd[1885]: Listen normally on 6 eth0 [fe80::434:d3ff:fe96:841%2]:123
Feb 13 19:31:27.631897 ntpd[1885]: 13 Feb 19:31:27 ntpd[1885]: Listen normally on 6 eth0 [fe80::434:d3ff:fe96:841%2]:123
Feb 13 19:31:27.840863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:31:27.846345 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 19:31:27.859843 systemd[1]: Startup finished in 766ms (kernel) + 7.763s (initrd) + 8.368s (userspace) = 16.898s.
Feb 13 19:31:28.150296 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:31:28.957660 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 19:31:28.968515 systemd[1]: Started sshd@0-172.31.18.16:22-139.178.68.195:40110.service - OpenSSH per-connection server daemon (139.178.68.195:40110).
Feb 13 19:31:29.198867 sshd[2143]: Accepted publickey for core from 139.178.68.195 port 40110 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA
Feb 13 19:31:29.208445 sshd-session[2143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:29.234336 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 19:31:29.249138 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 19:31:29.273640 systemd-logind[1890]: New session 1 of user core.
Feb 13 19:31:29.287807 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 19:31:29.300160 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 19:31:29.310356 (systemd)[2148]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 19:31:29.315156 systemd-logind[1890]: New session c1 of user core.
Feb 13 19:31:29.590390 systemd[2148]: Queued start job for default target default.target.
Feb 13 19:31:29.596351 systemd[2148]: Created slice app.slice - User Application Slice.
Feb 13 19:31:29.596404 systemd[2148]: Reached target paths.target - Paths.
Feb 13 19:31:29.597408 systemd[2148]: Reached target timers.target - Timers.
Feb 13 19:31:29.601168 systemd[2148]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 19:31:29.621162 systemd[2148]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 19:31:29.621353 systemd[2148]: Reached target sockets.target - Sockets.
Feb 13 19:31:29.621434 systemd[2148]: Reached target basic.target - Basic System.
Feb 13 19:31:29.621495 systemd[2148]: Reached target default.target - Main User Target.
Feb 13 19:31:29.621537 systemd[2148]: Startup finished in 295ms.
Feb 13 19:31:29.621767 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 19:31:29.629838 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 19:31:29.639622 kubelet[2133]: E0213 19:31:29.635805 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:31:29.640705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:31:29.640916 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:31:29.641371 systemd[1]: kubelet.service: Consumed 1.034s CPU time, 256.4M memory peak.
Feb 13 19:31:29.790092 systemd[1]: Started sshd@1-172.31.18.16:22-139.178.68.195:40124.service - OpenSSH per-connection server daemon (139.178.68.195:40124).
Feb 13 19:31:29.981308 sshd[2160]: Accepted publickey for core from 139.178.68.195 port 40124 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA
Feb 13 19:31:29.982991 sshd-session[2160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:30.017180 systemd-logind[1890]: New session 2 of user core.
Feb 13 19:31:30.025674 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 19:31:30.181462 sshd[2162]: Connection closed by 139.178.68.195 port 40124
Feb 13 19:31:30.182249 sshd-session[2160]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:30.194418 systemd[1]: sshd@1-172.31.18.16:22-139.178.68.195:40124.service: Deactivated successfully.
Feb 13 19:31:30.208037 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 19:31:30.214853 systemd-logind[1890]: Session 2 logged out. Waiting for processes to exit.
Feb 13 19:31:30.242987 systemd[1]: Started sshd@2-172.31.18.16:22-139.178.68.195:40136.service - OpenSSH per-connection server daemon (139.178.68.195:40136).
Feb 13 19:31:30.245061 systemd-logind[1890]: Removed session 2.
Feb 13 19:31:30.415999 sshd[2167]: Accepted publickey for core from 139.178.68.195 port 40136 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA
Feb 13 19:31:30.417818 sshd-session[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:30.429386 systemd-logind[1890]: New session 3 of user core.
Feb 13 19:31:30.435830 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 19:31:30.551731 sshd[2170]: Connection closed by 139.178.68.195 port 40136
Feb 13 19:31:30.552790 sshd-session[2167]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:30.557674 systemd[1]: sshd@2-172.31.18.16:22-139.178.68.195:40136.service: Deactivated successfully.
Feb 13 19:31:30.559980 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 19:31:30.560858 systemd-logind[1890]: Session 3 logged out. Waiting for processes to exit.
Feb 13 19:31:30.562014 systemd-logind[1890]: Removed session 3.
Feb 13 19:31:30.588997 systemd[1]: Started sshd@3-172.31.18.16:22-139.178.68.195:40146.service - OpenSSH per-connection server daemon (139.178.68.195:40146).
Feb 13 19:31:30.747373 sshd[2176]: Accepted publickey for core from 139.178.68.195 port 40146 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA
Feb 13 19:31:30.749300 sshd-session[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:30.756153 systemd-logind[1890]: New session 4 of user core.
Feb 13 19:31:30.766267 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 19:31:30.887058 sshd[2178]: Connection closed by 139.178.68.195 port 40146
Feb 13 19:31:30.888697 sshd-session[2176]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:30.892763 systemd[1]: sshd@3-172.31.18.16:22-139.178.68.195:40146.service: Deactivated successfully.
Feb 13 19:31:30.895408 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 19:31:30.897472 systemd-logind[1890]: Session 4 logged out. Waiting for processes to exit.
Feb 13 19:31:30.899302 systemd-logind[1890]: Removed session 4.
Feb 13 19:31:30.929356 systemd[1]: Started sshd@4-172.31.18.16:22-139.178.68.195:40156.service - OpenSSH per-connection server daemon (139.178.68.195:40156).
Feb 13 19:31:31.125717 sshd[2184]: Accepted publickey for core from 139.178.68.195 port 40156 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA
Feb 13 19:31:31.126619 sshd-session[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:31.132591 systemd-logind[1890]: New session 5 of user core.
Feb 13 19:31:31.143267 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 19:31:31.265027 sudo[2187]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 19:31:31.265443 sudo[2187]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:31:31.288736 sudo[2187]: pam_unix(sudo:session): session closed for user root
Feb 13 19:31:31.319316 sshd[2186]: Connection closed by 139.178.68.195 port 40156
Feb 13 19:31:31.320903 sshd-session[2184]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:31.353045 systemd[1]: sshd@4-172.31.18.16:22-139.178.68.195:40156.service: Deactivated successfully.
Feb 13 19:31:31.365877 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 19:31:31.367921 systemd-logind[1890]: Session 5 logged out. Waiting for processes to exit.
Feb 13 19:31:31.386095 systemd[1]: Started sshd@5-172.31.18.16:22-139.178.68.195:40172.service - OpenSSH per-connection server daemon (139.178.68.195:40172).
Feb 13 19:31:31.388842 systemd-logind[1890]: Removed session 5.
Feb 13 19:31:31.558277 sshd[2192]: Accepted publickey for core from 139.178.68.195 port 40172 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA
Feb 13 19:31:31.559918 sshd-session[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:31.582475 systemd-logind[1890]: New session 6 of user core.
Feb 13 19:31:31.592033 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 19:31:32.694358 systemd-resolved[1701]: Clock change detected. Flushing caches.
Feb 13 19:31:32.771225 sudo[2197]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 19:31:32.771735 sudo[2197]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:31:32.777830 sudo[2197]: pam_unix(sudo:session): session closed for user root
Feb 13 19:31:32.799991 sudo[2196]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 19:31:32.800444 sudo[2196]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:31:32.818746 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:31:32.866498 augenrules[2219]: No rules
Feb 13 19:31:32.868060 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:31:32.868332 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:31:32.869932 sudo[2196]: pam_unix(sudo:session): session closed for user root
Feb 13 19:31:32.892367 sshd[2195]: Connection closed by 139.178.68.195 port 40172
Feb 13 19:31:32.893674 sshd-session[2192]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:32.898563 systemd-logind[1890]: Session 6 logged out. Waiting for processes to exit.
Feb 13 19:31:32.899516 systemd[1]: sshd@5-172.31.18.16:22-139.178.68.195:40172.service: Deactivated successfully.
Feb 13 19:31:32.901981 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 19:31:32.903271 systemd-logind[1890]: Removed session 6.
Feb 13 19:31:32.937737 systemd[1]: Started sshd@6-172.31.18.16:22-139.178.68.195:40176.service - OpenSSH per-connection server daemon (139.178.68.195:40176).
Feb 13 19:31:33.101108 sshd[2228]: Accepted publickey for core from 139.178.68.195 port 40176 ssh2: RSA SHA256:4RzVjHMKaBGo1AAj9gDxYrxfUwSqdSolqipQw4KjDPA
Feb 13 19:31:33.102630 sshd-session[2228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:33.113117 systemd-logind[1890]: New session 7 of user core.
Feb 13 19:31:33.121614 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 19:31:33.221267 sudo[2231]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 19:31:33.221668 sudo[2231]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:31:34.439100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:31:34.439790 systemd[1]: kubelet.service: Consumed 1.034s CPU time, 256.4M memory peak.
Feb 13 19:31:34.447938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:31:34.497978 systemd[1]: Reload requested from client PID 2264 ('systemctl') (unit session-7.scope)...
Feb 13 19:31:34.498002 systemd[1]: Reloading...
Feb 13 19:31:34.738376 zram_generator::config[2312]: No configuration found.
Feb 13 19:31:34.909025 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:31:35.061730 systemd[1]: Reloading finished in 562 ms.
Feb 13 19:31:35.156598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:31:35.170520 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:31:35.174806 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:31:35.175536 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 19:31:35.176054 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:31:35.176157 systemd[1]: kubelet.service: Consumed 138ms CPU time, 91.8M memory peak.
Feb 13 19:31:35.186968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:31:35.517610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:31:35.528340 (kubelet)[2372]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:31:35.628328 kubelet[2372]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:31:35.628328 kubelet[2372]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 13 19:31:35.628328 kubelet[2372]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:31:35.628328 kubelet[2372]: I0213 19:31:35.627208 2372 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:31:36.057674 kubelet[2372]: I0213 19:31:36.057626 2372 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:31:36.057674 kubelet[2372]: I0213 19:31:36.057660 2372 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:31:36.058041 kubelet[2372]: I0213 19:31:36.058017 2372 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:31:36.099199 kubelet[2372]: I0213 19:31:36.099009 2372 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:31:36.116756 kubelet[2372]: E0213 19:31:36.116697 2372 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:31:36.116756 kubelet[2372]: I0213 19:31:36.116745 2372 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:31:36.120042 kubelet[2372]: I0213 19:31:36.120001 2372 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:31:36.121275 kubelet[2372]: I0213 19:31:36.120251 2372 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:31:36.121275 kubelet[2372]: I0213 19:31:36.120307 2372 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.18.16","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:31:36.121275 kubelet[2372]: I0213 19:31:36.120705 2372 topology_manager.go:138] "Creating topology manager with none 
policy" Feb 13 19:31:36.121275 kubelet[2372]: I0213 19:31:36.120722 2372 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:31:36.121541 kubelet[2372]: I0213 19:31:36.120888 2372 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:31:36.132514 kubelet[2372]: I0213 19:31:36.132417 2372 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:31:36.132514 kubelet[2372]: I0213 19:31:36.132465 2372 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:31:36.132514 kubelet[2372]: I0213 19:31:36.132494 2372 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:31:36.132514 kubelet[2372]: I0213 19:31:36.132508 2372 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:31:36.138009 kubelet[2372]: E0213 19:31:36.137134 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:36.138009 kubelet[2372]: E0213 19:31:36.137180 2372 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:36.139809 kubelet[2372]: I0213 19:31:36.139787 2372 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:31:36.140555 kubelet[2372]: I0213 19:31:36.140538 2372 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:31:36.140762 kubelet[2372]: W0213 19:31:36.140731 2372 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 19:31:36.146198 kubelet[2372]: I0213 19:31:36.144115 2372 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:31:36.146198 kubelet[2372]: I0213 19:31:36.144861 2372 server.go:1287] "Started kubelet" Feb 13 19:31:36.146198 kubelet[2372]: I0213 19:31:36.145177 2372 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:31:36.146198 kubelet[2372]: I0213 19:31:36.145406 2372 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:31:36.146198 kubelet[2372]: I0213 19:31:36.145846 2372 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:31:36.147752 kubelet[2372]: I0213 19:31:36.146885 2372 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:31:36.150523 kubelet[2372]: I0213 19:31:36.150496 2372 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:31:36.158326 kubelet[2372]: I0213 19:31:36.156127 2372 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:31:36.161197 kubelet[2372]: E0213 19:31:36.161160 2372 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.18.16\" not found" Feb 13 19:31:36.161197 kubelet[2372]: I0213 19:31:36.161203 2372 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:31:36.161504 kubelet[2372]: I0213 19:31:36.161484 2372 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:31:36.161556 kubelet[2372]: I0213 19:31:36.161543 2372 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:31:36.163086 kubelet[2372]: I0213 19:31:36.163058 2372 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:31:36.163212 kubelet[2372]: I0213 19:31:36.163189 2372 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:31:36.166667 kubelet[2372]: E0213 19:31:36.166637 2372 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:31:36.167186 kubelet[2372]: I0213 19:31:36.167162 2372 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:31:36.186282 kubelet[2372]: E0213 19:31:36.182134 2372 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.18.16.1823db62414f59ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.18.16,UID:172.31.18.16,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.18.16,},FirstTimestamp:2025-02-13 19:31:36.144140746 +0000 UTC m=+0.608878460,LastTimestamp:2025-02-13 19:31:36.144140746 +0000 UTC m=+0.608878460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.18.16,}" Feb 13 19:31:36.188685 kubelet[2372]: W0213 19:31:36.188652 2372 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:31:36.188826 kubelet[2372]: E0213 19:31:36.188704 2372 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 
19:31:36.188826 kubelet[2372]: W0213 19:31:36.188785 2372 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.18.16" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:31:36.188826 kubelet[2372]: E0213 19:31:36.188802 2372 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.18.16\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 19:31:36.189094 kubelet[2372]: W0213 19:31:36.189005 2372 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:31:36.189094 kubelet[2372]: E0213 19:31:36.189056 2372 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 13 19:31:36.191324 kubelet[2372]: E0213 19:31:36.189463 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.18.16\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 19:31:36.193964 kubelet[2372]: I0213 19:31:36.193901 2372 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:31:36.193964 kubelet[2372]: I0213 19:31:36.193918 2372 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:31:36.193964 kubelet[2372]: I0213 19:31:36.193940 2372 state_mem.go:36] "Initialized 
new in-memory state store" Feb 13 19:31:36.201839 kubelet[2372]: I0213 19:31:36.201442 2372 policy_none.go:49] "None policy: Start" Feb 13 19:31:36.201839 kubelet[2372]: I0213 19:31:36.201474 2372 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:31:36.201839 kubelet[2372]: I0213 19:31:36.201493 2372 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:31:36.219028 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:31:36.234722 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:31:36.247803 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:31:36.257326 kubelet[2372]: I0213 19:31:36.256965 2372 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:31:36.257326 kubelet[2372]: I0213 19:31:36.257241 2372 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:31:36.263265 kubelet[2372]: I0213 19:31:36.257255 2372 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:31:36.263265 kubelet[2372]: I0213 19:31:36.259215 2372 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:31:36.266177 kubelet[2372]: E0213 19:31:36.264832 2372 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:31:36.266177 kubelet[2372]: E0213 19:31:36.264989 2372 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.18.16\" not found" Feb 13 19:31:36.285810 kubelet[2372]: I0213 19:31:36.285749 2372 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Feb 13 19:31:36.288550 kubelet[2372]: I0213 19:31:36.288413 2372 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:31:36.290806 kubelet[2372]: I0213 19:31:36.288816 2372 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:31:36.290806 kubelet[2372]: I0213 19:31:36.288863 2372 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 19:31:36.290806 kubelet[2372]: I0213 19:31:36.288875 2372 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:31:36.290806 kubelet[2372]: E0213 19:31:36.288969 2372 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 19:31:36.360810 kubelet[2372]: I0213 19:31:36.360643 2372 kubelet_node_status.go:76] "Attempting to register node" node="172.31.18.16" Feb 13 19:31:36.374319 kubelet[2372]: I0213 19:31:36.372987 2372 kubelet_node_status.go:79] "Successfully registered node" node="172.31.18.16" Feb 13 19:31:36.374319 kubelet[2372]: E0213 19:31:36.373024 2372 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172.31.18.16\": node \"172.31.18.16\" not found" Feb 13 19:31:36.389158 kubelet[2372]: E0213 19:31:36.389123 2372 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.18.16\" not found" Feb 13 19:31:36.489850 kubelet[2372]: E0213 19:31:36.489766 2372 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.18.16\" not found" Feb 13 19:31:36.572284 sudo[2231]: pam_unix(sudo:session): session closed for user root Feb 13 19:31:36.590092 kubelet[2372]: E0213 19:31:36.590040 2372 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.18.16\" not found" Feb 13 19:31:36.595205 sshd[2230]: Connection closed by 139.178.68.195 port 40176 Feb 13 19:31:36.595996 sshd-session[2228]: 
pam_unix(sshd:session): session closed for user core Feb 13 19:31:36.599811 systemd[1]: sshd@6-172.31.18.16:22-139.178.68.195:40176.service: Deactivated successfully. Feb 13 19:31:36.602951 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:31:36.603199 systemd[1]: session-7.scope: Consumed 514ms CPU time, 74.3M memory peak. Feb 13 19:31:36.607285 systemd-logind[1890]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:31:36.609555 systemd-logind[1890]: Removed session 7. Feb 13 19:31:36.691358 kubelet[2372]: E0213 19:31:36.691185 2372 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.18.16\" not found" Feb 13 19:31:36.792205 kubelet[2372]: E0213 19:31:36.792151 2372 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.18.16\" not found" Feb 13 19:31:36.892958 kubelet[2372]: E0213 19:31:36.892903 2372 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.18.16\" not found" Feb 13 19:31:36.993808 kubelet[2372]: E0213 19:31:36.993666 2372 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.18.16\" not found" Feb 13 19:31:37.062016 kubelet[2372]: I0213 19:31:37.061963 2372 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:31:37.062220 kubelet[2372]: W0213 19:31:37.062194 2372 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:31:37.094537 kubelet[2372]: E0213 19:31:37.094474 2372 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.31.18.16\" not found" Feb 13 19:31:37.137392 kubelet[2372]: E0213 19:31:37.137320 2372 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:37.195855 kubelet[2372]: I0213 19:31:37.195822 2372 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:31:37.196534 containerd[1908]: time="2025-02-13T19:31:37.196373841Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:31:37.197310 kubelet[2372]: I0213 19:31:37.197154 2372 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:31:38.137476 kubelet[2372]: I0213 19:31:38.137431 2372 apiserver.go:52] "Watching apiserver" Feb 13 19:31:38.138040 kubelet[2372]: E0213 19:31:38.137421 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:38.165886 kubelet[2372]: I0213 19:31:38.165820 2372 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:31:38.174135 systemd[1]: Created slice kubepods-burstable-pod3daaac13_930b_48fc_a743_9e5b15729f18.slice - libcontainer container kubepods-burstable-pod3daaac13_930b_48fc_a743_9e5b15729f18.slice. 
Feb 13 19:31:38.185330 kubelet[2372]: I0213 19:31:38.184491 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ff06d874-572f-4829-a6e2-07f77492d164-kube-proxy\") pod \"kube-proxy-wrxgx\" (UID: \"ff06d874-572f-4829-a6e2-07f77492d164\") " pod="kube-system/kube-proxy-wrxgx" Feb 13 19:31:38.185330 kubelet[2372]: I0213 19:31:38.184592 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-host-proc-sys-net\") pod \"cilium-rbprw\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") " pod="kube-system/cilium-rbprw" Feb 13 19:31:38.185330 kubelet[2372]: I0213 19:31:38.184674 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-lib-modules\") pod \"cilium-rbprw\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") " pod="kube-system/cilium-rbprw" Feb 13 19:31:38.185330 kubelet[2372]: I0213 19:31:38.184704 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3daaac13-930b-48fc-a743-9e5b15729f18-clustermesh-secrets\") pod \"cilium-rbprw\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") " pod="kube-system/cilium-rbprw" Feb 13 19:31:38.185330 kubelet[2372]: I0213 19:31:38.184770 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3daaac13-930b-48fc-a743-9e5b15729f18-cilium-config-path\") pod \"cilium-rbprw\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") " pod="kube-system/cilium-rbprw" Feb 13 19:31:38.185330 kubelet[2372]: I0213 19:31:38.184795 2372 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-hostproc\") pod \"cilium-rbprw\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") " pod="kube-system/cilium-rbprw" Feb 13 19:31:38.185714 kubelet[2372]: I0213 19:31:38.185007 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-cni-path\") pod \"cilium-rbprw\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") " pod="kube-system/cilium-rbprw" Feb 13 19:31:38.185714 kubelet[2372]: I0213 19:31:38.185057 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-xtables-lock\") pod \"cilium-rbprw\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") " pod="kube-system/cilium-rbprw" Feb 13 19:31:38.185714 kubelet[2372]: I0213 19:31:38.185082 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68xpj\" (UniqueName: \"kubernetes.io/projected/3daaac13-930b-48fc-a743-9e5b15729f18-kube-api-access-68xpj\") pod \"cilium-rbprw\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") " pod="kube-system/cilium-rbprw" Feb 13 19:31:38.185714 kubelet[2372]: I0213 19:31:38.185105 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-cilium-run\") pod \"cilium-rbprw\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") " pod="kube-system/cilium-rbprw" Feb 13 19:31:38.185714 kubelet[2372]: I0213 19:31:38.185128 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-cilium-cgroup\") pod \"cilium-rbprw\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") " pod="kube-system/cilium-rbprw" Feb 13 19:31:38.185714 kubelet[2372]: I0213 19:31:38.185155 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-etc-cni-netd\") pod \"cilium-rbprw\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") " pod="kube-system/cilium-rbprw" Feb 13 19:31:38.185952 kubelet[2372]: I0213 19:31:38.185178 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-host-proc-sys-kernel\") pod \"cilium-rbprw\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") " pod="kube-system/cilium-rbprw" Feb 13 19:31:38.185952 kubelet[2372]: I0213 19:31:38.185204 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3daaac13-930b-48fc-a743-9e5b15729f18-hubble-tls\") pod \"cilium-rbprw\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") " pod="kube-system/cilium-rbprw" Feb 13 19:31:38.185952 kubelet[2372]: I0213 19:31:38.185282 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff06d874-572f-4829-a6e2-07f77492d164-xtables-lock\") pod \"kube-proxy-wrxgx\" (UID: \"ff06d874-572f-4829-a6e2-07f77492d164\") " pod="kube-system/kube-proxy-wrxgx" Feb 13 19:31:38.185952 kubelet[2372]: I0213 19:31:38.185326 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff06d874-572f-4829-a6e2-07f77492d164-lib-modules\") pod \"kube-proxy-wrxgx\" (UID: 
\"ff06d874-572f-4829-a6e2-07f77492d164\") " pod="kube-system/kube-proxy-wrxgx" Feb 13 19:31:38.185952 kubelet[2372]: I0213 19:31:38.185349 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l28gw\" (UniqueName: \"kubernetes.io/projected/ff06d874-572f-4829-a6e2-07f77492d164-kube-api-access-l28gw\") pod \"kube-proxy-wrxgx\" (UID: \"ff06d874-572f-4829-a6e2-07f77492d164\") " pod="kube-system/kube-proxy-wrxgx" Feb 13 19:31:38.186144 kubelet[2372]: I0213 19:31:38.185372 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-bpf-maps\") pod \"cilium-rbprw\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") " pod="kube-system/cilium-rbprw" Feb 13 19:31:38.199773 systemd[1]: Created slice kubepods-besteffort-podff06d874_572f_4829_a6e2_07f77492d164.slice - libcontainer container kubepods-besteffort-podff06d874_572f_4829_a6e2_07f77492d164.slice. Feb 13 19:31:38.498805 containerd[1908]: time="2025-02-13T19:31:38.498639613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rbprw,Uid:3daaac13-930b-48fc-a743-9e5b15729f18,Namespace:kube-system,Attempt:0,}" Feb 13 19:31:38.522090 containerd[1908]: time="2025-02-13T19:31:38.522028481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wrxgx,Uid:ff06d874-572f-4829-a6e2-07f77492d164,Namespace:kube-system,Attempt:0,}" Feb 13 19:31:39.091697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2327533431.mount: Deactivated successfully. 
Feb 13 19:31:39.102275 containerd[1908]: time="2025-02-13T19:31:39.102217083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:31:39.104540 containerd[1908]: time="2025-02-13T19:31:39.104490009Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:31:39.106014 containerd[1908]: time="2025-02-13T19:31:39.105916625Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:31:39.109567 containerd[1908]: time="2025-02-13T19:31:39.109457426Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:31:39.109567 containerd[1908]: time="2025-02-13T19:31:39.109472244Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:31:39.117334 containerd[1908]: time="2025-02-13T19:31:39.114977836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:31:39.120703 containerd[1908]: time="2025-02-13T19:31:39.119480388Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 620.698849ms" Feb 13 19:31:39.128451 containerd[1908]: 
time="2025-02-13T19:31:39.128396984Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 606.251783ms" Feb 13 19:31:39.138831 kubelet[2372]: E0213 19:31:39.138393 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:39.435024 containerd[1908]: time="2025-02-13T19:31:39.431139854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:31:39.435024 containerd[1908]: time="2025-02-13T19:31:39.434737862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:31:39.435024 containerd[1908]: time="2025-02-13T19:31:39.434757640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:31:39.435024 containerd[1908]: time="2025-02-13T19:31:39.434879622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:31:39.446426 containerd[1908]: time="2025-02-13T19:31:39.445809790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:31:39.446426 containerd[1908]: time="2025-02-13T19:31:39.445903905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:31:39.446426 containerd[1908]: time="2025-02-13T19:31:39.445927751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:31:39.454628 containerd[1908]: time="2025-02-13T19:31:39.448718628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:31:39.536550 systemd[1]: Started cri-containerd-9f8d84155a12f465cffe905ba3f4fbc944f47d0181d7fc39a2e5c6248e1b2c10.scope - libcontainer container 9f8d84155a12f465cffe905ba3f4fbc944f47d0181d7fc39a2e5c6248e1b2c10. Feb 13 19:31:39.545463 systemd[1]: Started cri-containerd-568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f.scope - libcontainer container 568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f. Feb 13 19:31:39.596334 containerd[1908]: time="2025-02-13T19:31:39.596275188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wrxgx,Uid:ff06d874-572f-4829-a6e2-07f77492d164,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f8d84155a12f465cffe905ba3f4fbc944f47d0181d7fc39a2e5c6248e1b2c10\"" Feb 13 19:31:39.599988 containerd[1908]: time="2025-02-13T19:31:39.599850569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rbprw,Uid:3daaac13-930b-48fc-a743-9e5b15729f18,Namespace:kube-system,Attempt:0,} returns sandbox id \"568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f\"" Feb 13 19:31:39.606684 containerd[1908]: time="2025-02-13T19:31:39.606635899Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:31:40.139494 kubelet[2372]: E0213 19:31:40.139437 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:41.042454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount568928245.mount: Deactivated successfully. 
Feb 13 19:31:41.140242 kubelet[2372]: E0213 19:31:41.140085 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:31:41.668721 containerd[1908]: time="2025-02-13T19:31:41.668664664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:31:41.669904 containerd[1908]: time="2025-02-13T19:31:41.669731833Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839"
Feb 13 19:31:41.672084 containerd[1908]: time="2025-02-13T19:31:41.670974848Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:31:41.673185 containerd[1908]: time="2025-02-13T19:31:41.673112750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:31:41.676089 containerd[1908]: time="2025-02-13T19:31:41.676030885Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 2.06934868s"
Feb 13 19:31:41.676209 containerd[1908]: time="2025-02-13T19:31:41.676094928Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\""
Feb 13 19:31:41.681035 containerd[1908]: time="2025-02-13T19:31:41.680992406Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 19:31:41.682175 containerd[1908]: time="2025-02-13T19:31:41.682131074Z" level=info msg="CreateContainer within sandbox \"9f8d84155a12f465cffe905ba3f4fbc944f47d0181d7fc39a2e5c6248e1b2c10\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:31:41.704573 containerd[1908]: time="2025-02-13T19:31:41.704522620Z" level=info msg="CreateContainer within sandbox \"9f8d84155a12f465cffe905ba3f4fbc944f47d0181d7fc39a2e5c6248e1b2c10\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"af048458ba1d095449028c9a273a0963ae6e48c7f72a9c09e5e356cef818c535\""
Feb 13 19:31:41.707189 containerd[1908]: time="2025-02-13T19:31:41.705399218Z" level=info msg="StartContainer for \"af048458ba1d095449028c9a273a0963ae6e48c7f72a9c09e5e356cef818c535\""
Feb 13 19:31:41.750750 systemd[1]: Started cri-containerd-af048458ba1d095449028c9a273a0963ae6e48c7f72a9c09e5e356cef818c535.scope - libcontainer container af048458ba1d095449028c9a273a0963ae6e48c7f72a9c09e5e356cef818c535.
Feb 13 19:31:41.791646 containerd[1908]: time="2025-02-13T19:31:41.791572188Z" level=info msg="StartContainer for \"af048458ba1d095449028c9a273a0963ae6e48c7f72a9c09e5e356cef818c535\" returns successfully"
Feb 13 19:31:42.142442 kubelet[2372]: E0213 19:31:42.140271 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:31:42.356724 kubelet[2372]: I0213 19:31:42.356640 2372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wrxgx" podStartSLOduration=4.282518373 podStartE2EDuration="6.356622357s" podCreationTimestamp="2025-02-13 19:31:36 +0000 UTC" firstStartedPulling="2025-02-13 19:31:39.605986239 +0000 UTC m=+4.070723938" lastFinishedPulling="2025-02-13 19:31:41.680090214 +0000 UTC m=+6.144827922" observedRunningTime="2025-02-13 19:31:42.356403188 +0000 UTC m=+6.821140907" watchObservedRunningTime="2025-02-13 19:31:42.356622357 +0000 UTC m=+6.821360074"
Feb 13 19:31:43.143954 kubelet[2372]: E0213 19:31:43.143889 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:31:44.145320 kubelet[2372]: E0213 19:31:44.145208 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:31:45.145653 kubelet[2372]: E0213 19:31:45.145617 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:31:46.147566 kubelet[2372]: E0213 19:31:46.147316 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:31:47.148968 kubelet[2372]: E0213 19:31:47.148924 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:31:47.583180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4205548739.mount: Deactivated successfully.
Feb 13 19:31:48.149125 kubelet[2372]: E0213 19:31:48.149061 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:31:49.149984 kubelet[2372]: E0213 19:31:49.149920 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:31:50.150629 kubelet[2372]: E0213 19:31:50.150560 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:31:50.308838 containerd[1908]: time="2025-02-13T19:31:50.308782838Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:31:50.310342 containerd[1908]: time="2025-02-13T19:31:50.310159586Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Feb 13 19:31:50.311637 containerd[1908]: time="2025-02-13T19:31:50.311343115Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:31:50.313038 containerd[1908]: time="2025-02-13T19:31:50.313001230Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.631154534s"
Feb 13 19:31:50.313172 containerd[1908]: time="2025-02-13T19:31:50.313151830Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 13 19:31:50.315774 containerd[1908]: time="2025-02-13T19:31:50.315741348Z" level=info msg="CreateContainer within sandbox \"568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:31:50.332088 containerd[1908]: time="2025-02-13T19:31:50.331925642Z" level=info msg="CreateContainer within sandbox \"568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317\""
Feb 13 19:31:50.333926 containerd[1908]: time="2025-02-13T19:31:50.332972669Z" level=info msg="StartContainer for \"302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317\""
Feb 13 19:31:50.376451 systemd[1]: run-containerd-runc-k8s.io-302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317-runc.gRYxBZ.mount: Deactivated successfully.
Feb 13 19:31:50.389555 systemd[1]: Started cri-containerd-302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317.scope - libcontainer container 302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317.
Feb 13 19:31:50.427406 containerd[1908]: time="2025-02-13T19:31:50.427154869Z" level=info msg="StartContainer for \"302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317\" returns successfully"
Feb 13 19:31:50.445855 systemd[1]: cri-containerd-302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317.scope: Deactivated successfully.
Feb 13 19:31:50.685202 containerd[1908]: time="2025-02-13T19:31:50.684820406Z" level=info msg="shim disconnected" id=302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317 namespace=k8s.io Feb 13 19:31:50.685202 containerd[1908]: time="2025-02-13T19:31:50.684889147Z" level=warning msg="cleaning up after shim disconnected" id=302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317 namespace=k8s.io Feb 13 19:31:50.685202 containerd[1908]: time="2025-02-13T19:31:50.684901325Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:31:51.150708 kubelet[2372]: E0213 19:31:51.150644 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:51.325284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317-rootfs.mount: Deactivated successfully. Feb 13 19:31:51.391609 containerd[1908]: time="2025-02-13T19:31:51.391320421Z" level=info msg="CreateContainer within sandbox \"568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:31:51.419949 containerd[1908]: time="2025-02-13T19:31:51.419839780Z" level=info msg="CreateContainer within sandbox \"568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d\"" Feb 13 19:31:51.420800 containerd[1908]: time="2025-02-13T19:31:51.420764334Z" level=info msg="StartContainer for \"b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d\"" Feb 13 19:31:51.469842 systemd[1]: Started cri-containerd-b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d.scope - libcontainer container b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d. 
Feb 13 19:31:51.533251 containerd[1908]: time="2025-02-13T19:31:51.533073249Z" level=info msg="StartContainer for \"b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d\" returns successfully" Feb 13 19:31:51.545021 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:31:51.546566 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:31:51.546995 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:31:51.554511 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:31:51.560289 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 19:31:51.561102 systemd[1]: cri-containerd-b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d.scope: Deactivated successfully. Feb 13 19:31:51.599564 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:31:51.610138 containerd[1908]: time="2025-02-13T19:31:51.610065154Z" level=info msg="shim disconnected" id=b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d namespace=k8s.io Feb 13 19:31:51.610138 containerd[1908]: time="2025-02-13T19:31:51.610130383Z" level=warning msg="cleaning up after shim disconnected" id=b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d namespace=k8s.io Feb 13 19:31:51.610138 containerd[1908]: time="2025-02-13T19:31:51.610142540Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:31:52.151341 kubelet[2372]: E0213 19:31:52.151267 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:52.326040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d-rootfs.mount: Deactivated successfully. 
Feb 13 19:31:52.395387 containerd[1908]: time="2025-02-13T19:31:52.395177686Z" level=info msg="CreateContainer within sandbox \"568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:31:52.425066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount53216898.mount: Deactivated successfully. Feb 13 19:31:52.426414 containerd[1908]: time="2025-02-13T19:31:52.426238967Z" level=info msg="CreateContainer within sandbox \"568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8\"" Feb 13 19:31:52.427653 containerd[1908]: time="2025-02-13T19:31:52.427618743Z" level=info msg="StartContainer for \"d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8\"" Feb 13 19:31:52.483970 systemd[1]: Started cri-containerd-d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8.scope - libcontainer container d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8. Feb 13 19:31:52.539523 containerd[1908]: time="2025-02-13T19:31:52.539471060Z" level=info msg="StartContainer for \"d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8\" returns successfully" Feb 13 19:31:52.541272 systemd[1]: cri-containerd-d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8.scope: Deactivated successfully. 
Feb 13 19:31:52.578312 containerd[1908]: time="2025-02-13T19:31:52.578210861Z" level=info msg="shim disconnected" id=d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8 namespace=k8s.io Feb 13 19:31:52.578312 containerd[1908]: time="2025-02-13T19:31:52.578276041Z" level=warning msg="cleaning up after shim disconnected" id=d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8 namespace=k8s.io Feb 13 19:31:52.578312 containerd[1908]: time="2025-02-13T19:31:52.578289898Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:31:53.151572 kubelet[2372]: E0213 19:31:53.151490 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:53.326501 systemd[1]: run-containerd-runc-k8s.io-d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8-runc.r8ph8W.mount: Deactivated successfully. Feb 13 19:31:53.326654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8-rootfs.mount: Deactivated successfully. 
Feb 13 19:31:53.442054 containerd[1908]: time="2025-02-13T19:31:53.440950447Z" level=info msg="CreateContainer within sandbox \"568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:31:53.488472 containerd[1908]: time="2025-02-13T19:31:53.488422906Z" level=info msg="CreateContainer within sandbox \"568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081\"" Feb 13 19:31:53.489219 containerd[1908]: time="2025-02-13T19:31:53.489182659Z" level=info msg="StartContainer for \"05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081\"" Feb 13 19:31:53.555538 systemd[1]: Started cri-containerd-05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081.scope - libcontainer container 05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081. Feb 13 19:31:53.587381 systemd[1]: cri-containerd-05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081.scope: Deactivated successfully. 
Feb 13 19:31:53.589742 containerd[1908]: time="2025-02-13T19:31:53.589688704Z" level=info msg="StartContainer for \"05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081\" returns successfully" Feb 13 19:31:53.615719 containerd[1908]: time="2025-02-13T19:31:53.615650317Z" level=info msg="shim disconnected" id=05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081 namespace=k8s.io Feb 13 19:31:53.615719 containerd[1908]: time="2025-02-13T19:31:53.615712564Z" level=warning msg="cleaning up after shim disconnected" id=05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081 namespace=k8s.io Feb 13 19:31:53.615719 containerd[1908]: time="2025-02-13T19:31:53.615724398Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:31:54.152466 kubelet[2372]: E0213 19:31:54.152271 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:54.326043 systemd[1]: run-containerd-runc-k8s.io-05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081-runc.o2lAg7.mount: Deactivated successfully. Feb 13 19:31:54.326188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081-rootfs.mount: Deactivated successfully. 
Feb 13 19:31:54.436693 containerd[1908]: time="2025-02-13T19:31:54.436249169Z" level=info msg="CreateContainer within sandbox \"568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:31:54.484115 containerd[1908]: time="2025-02-13T19:31:54.484046519Z" level=info msg="CreateContainer within sandbox \"568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246\"" Feb 13 19:31:54.486152 containerd[1908]: time="2025-02-13T19:31:54.486107296Z" level=info msg="StartContainer for \"daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246\"" Feb 13 19:31:54.528773 systemd[1]: Started cri-containerd-daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246.scope - libcontainer container daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246. 
Feb 13 19:31:54.570735 containerd[1908]: time="2025-02-13T19:31:54.570678439Z" level=info msg="StartContainer for \"daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246\" returns successfully" Feb 13 19:31:54.696727 kubelet[2372]: I0213 19:31:54.695196 2372 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:31:55.073335 kernel: Initializing XFRM netlink socket Feb 13 19:31:55.152948 kubelet[2372]: E0213 19:31:55.152870 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:56.133228 kubelet[2372]: E0213 19:31:56.133166 2372 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:56.153347 kubelet[2372]: E0213 19:31:56.153280 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:31:56.722830 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 19:31:56.753701 systemd-networkd[1748]: cilium_host: Link UP Feb 13 19:31:56.755389 systemd-networkd[1748]: cilium_net: Link UP Feb 13 19:31:56.755626 systemd-networkd[1748]: cilium_net: Gained carrier Feb 13 19:31:56.755822 systemd-networkd[1748]: cilium_host: Gained carrier Feb 13 19:31:56.766363 (udev-worker)[3026]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:31:56.768835 (udev-worker)[3028]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:31:56.912403 (udev-worker)[3087]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:31:56.926128 systemd-networkd[1748]: cilium_vxlan: Link UP
Feb 13 19:31:56.926144 systemd-networkd[1748]: cilium_vxlan: Gained carrier
Feb 13 19:31:56.982582 systemd-networkd[1748]: cilium_host: Gained IPv6LL
Feb 13 19:31:57.054638 systemd-networkd[1748]: cilium_net: Gained IPv6LL
Feb 13 19:31:57.155336 kubelet[2372]: E0213 19:31:57.153413 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:31:57.260356 kernel: NET: Registered PF_ALG protocol family
Feb 13 19:31:58.049447 systemd-networkd[1748]: lxc_health: Link UP
Feb 13 19:31:58.053238 systemd-networkd[1748]: lxc_health: Gained carrier
Feb 13 19:31:58.154687 kubelet[2372]: E0213 19:31:58.154613 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:31:58.536691 kubelet[2372]: I0213 19:31:58.536504 2372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rbprw" podStartSLOduration=11.829429265 podStartE2EDuration="22.536467058s" podCreationTimestamp="2025-02-13 19:31:36 +0000 UTC" firstStartedPulling="2025-02-13 19:31:39.607207816 +0000 UTC m=+4.071945511" lastFinishedPulling="2025-02-13 19:31:50.314245606 +0000 UTC m=+14.778983304" observedRunningTime="2025-02-13 19:31:55.512054233 +0000 UTC m=+19.976791951" watchObservedRunningTime="2025-02-13 19:31:58.536467058 +0000 UTC m=+23.001204757"
Feb 13 19:31:58.886529 systemd-networkd[1748]: cilium_vxlan: Gained IPv6LL
Feb 13 19:31:59.155769 kubelet[2372]: E0213 19:31:59.155555 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:31:59.718570 systemd-networkd[1748]: lxc_health: Gained IPv6LL
Feb 13 19:32:00.156963 kubelet[2372]: E0213 19:32:00.156153 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:00.520282 systemd[1]: Created slice kubepods-besteffort-pod6ca4ff8e_10e6_482b_bef1_4be09da1f02e.slice - libcontainer container kubepods-besteffort-pod6ca4ff8e_10e6_482b_bef1_4be09da1f02e.slice.
Feb 13 19:32:00.581008 kubelet[2372]: I0213 19:32:00.580951 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4c4s\" (UniqueName: \"kubernetes.io/projected/6ca4ff8e-10e6-482b-bef1-4be09da1f02e-kube-api-access-n4c4s\") pod \"nginx-deployment-7fcdb87857-72nc4\" (UID: \"6ca4ff8e-10e6-482b-bef1-4be09da1f02e\") " pod="default/nginx-deployment-7fcdb87857-72nc4"
Feb 13 19:32:00.825769 containerd[1908]: time="2025-02-13T19:32:00.825622325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-72nc4,Uid:6ca4ff8e-10e6-482b-bef1-4be09da1f02e,Namespace:default,Attempt:0,}"
Feb 13 19:32:00.916183 systemd-networkd[1748]: lxcf52711a9ab94: Link UP
Feb 13 19:32:00.920810 kernel: eth0: renamed from tmp369ba
Feb 13 19:32:00.927637 systemd-networkd[1748]: lxcf52711a9ab94: Gained carrier
Feb 13 19:32:01.157433 kubelet[2372]: E0213 19:32:01.156734 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:02.157779 kubelet[2372]: E0213 19:32:02.157702 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:02.537407 systemd-networkd[1748]: lxcf52711a9ab94: Gained IPv6LL
Feb 13 19:32:03.158950 kubelet[2372]: E0213 19:32:03.158826 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:04.159604 kubelet[2372]: E0213 19:32:04.159526 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:04.699133 ntpd[1885]: Listen normally on 7 cilium_host 192.168.1.102:123
Feb 13 19:32:04.701616 ntpd[1885]: 13 Feb 19:32:04 ntpd[1885]: Listen normally on 7 cilium_host 192.168.1.102:123
Feb 13 19:32:04.701616 ntpd[1885]: 13 Feb 19:32:04 ntpd[1885]: Listen normally on 8 cilium_net [fe80::a83b:2aff:fe0c:8c2a%3]:123
Feb 13 19:32:04.701616 ntpd[1885]: 13 Feb 19:32:04 ntpd[1885]: Listen normally on 9 cilium_host [fe80::4cc8:87ff:fe54:b912%4]:123
Feb 13 19:32:04.701616 ntpd[1885]: 13 Feb 19:32:04 ntpd[1885]: Listen normally on 10 cilium_vxlan [fe80::f0dc:e9ff:fecc:451a%5]:123
Feb 13 19:32:04.701616 ntpd[1885]: 13 Feb 19:32:04 ntpd[1885]: Listen normally on 11 lxc_health [fe80::ec3c:f2ff:fea1:e4f%7]:123
Feb 13 19:32:04.701616 ntpd[1885]: 13 Feb 19:32:04 ntpd[1885]: Listen normally on 12 lxcf52711a9ab94 [fe80::98ca:fcff:fe1e:5825%9]:123
Feb 13 19:32:04.700831 ntpd[1885]: Listen normally on 8 cilium_net [fe80::a83b:2aff:fe0c:8c2a%3]:123
Feb 13 19:32:04.700918 ntpd[1885]: Listen normally on 9 cilium_host [fe80::4cc8:87ff:fe54:b912%4]:123
Feb 13 19:32:04.700964 ntpd[1885]: Listen normally on 10 cilium_vxlan [fe80::f0dc:e9ff:fecc:451a%5]:123
Feb 13 19:32:04.701007 ntpd[1885]: Listen normally on 11 lxc_health [fe80::ec3c:f2ff:fea1:e4f%7]:123
Feb 13 19:32:04.701049 ntpd[1885]: Listen normally on 12 lxcf52711a9ab94 [fe80::98ca:fcff:fe1e:5825%9]:123
Feb 13 19:32:04.980250 containerd[1908]: time="2025-02-13T19:32:04.979795046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:32:04.981063 containerd[1908]: time="2025-02-13T19:32:04.980670150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:32:04.981511 containerd[1908]: time="2025-02-13T19:32:04.981459546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:32:04.981818 containerd[1908]: time="2025-02-13T19:32:04.981759725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:32:05.041455 systemd[1]: run-containerd-runc-k8s.io-369bac4cda70b0f8520845071520083653ede7167376d38a360b25d0b2cb399e-runc.y39JS9.mount: Deactivated successfully.
Feb 13 19:32:05.049557 systemd[1]: Started cri-containerd-369bac4cda70b0f8520845071520083653ede7167376d38a360b25d0b2cb399e.scope - libcontainer container 369bac4cda70b0f8520845071520083653ede7167376d38a360b25d0b2cb399e.
Feb 13 19:32:05.119936 containerd[1908]: time="2025-02-13T19:32:05.119892966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-72nc4,Uid:6ca4ff8e-10e6-482b-bef1-4be09da1f02e,Namespace:default,Attempt:0,} returns sandbox id \"369bac4cda70b0f8520845071520083653ede7167376d38a360b25d0b2cb399e\""
Feb 13 19:32:05.122055 containerd[1908]: time="2025-02-13T19:32:05.121778663Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 19:32:05.160767 kubelet[2372]: E0213 19:32:05.160713 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:06.161472 kubelet[2372]: E0213 19:32:06.161352 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:07.161558 kubelet[2372]: E0213 19:32:07.161480 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:08.084579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount353017417.mount: Deactivated successfully.
Feb 13 19:32:08.162317 kubelet[2372]: E0213 19:32:08.161809 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:09.162386 kubelet[2372]: E0213 19:32:09.162341 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:09.777233 containerd[1908]: time="2025-02-13T19:32:09.777172265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:32:09.782466 containerd[1908]: time="2025-02-13T19:32:09.782173448Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493"
Feb 13 19:32:09.786323 containerd[1908]: time="2025-02-13T19:32:09.784894779Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:32:09.792111 containerd[1908]: time="2025-02-13T19:32:09.792063989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:32:09.793496 containerd[1908]: time="2025-02-13T19:32:09.793458315Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 4.671625548s"
Feb 13 19:32:09.793643 containerd[1908]: time="2025-02-13T19:32:09.793623311Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 19:32:09.796727 containerd[1908]: time="2025-02-13T19:32:09.796690795Z" level=info msg="CreateContainer within sandbox \"369bac4cda70b0f8520845071520083653ede7167376d38a360b25d0b2cb399e\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 13 19:32:09.815770 containerd[1908]: time="2025-02-13T19:32:09.815723505Z" level=info msg="CreateContainer within sandbox \"369bac4cda70b0f8520845071520083653ede7167376d38a360b25d0b2cb399e\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4e5dbb1ed3d73749d3b45e6a327629b7e58474d1f27a7b2c10c03714cba2dff6\""
Feb 13 19:32:09.816836 containerd[1908]: time="2025-02-13T19:32:09.816803987Z" level=info msg="StartContainer for \"4e5dbb1ed3d73749d3b45e6a327629b7e58474d1f27a7b2c10c03714cba2dff6\""
Feb 13 19:32:09.868621 systemd[1]: Started cri-containerd-4e5dbb1ed3d73749d3b45e6a327629b7e58474d1f27a7b2c10c03714cba2dff6.scope - libcontainer container 4e5dbb1ed3d73749d3b45e6a327629b7e58474d1f27a7b2c10c03714cba2dff6.
Feb 13 19:32:09.924202 containerd[1908]: time="2025-02-13T19:32:09.924036025Z" level=info msg="StartContainer for \"4e5dbb1ed3d73749d3b45e6a327629b7e58474d1f27a7b2c10c03714cba2dff6\" returns successfully"
Feb 13 19:32:10.163601 kubelet[2372]: E0213 19:32:10.163455 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:11.026538 update_engine[1891]: I20250213 19:32:11.026371 1891 update_attempter.cc:509] Updating boot flags...
Feb 13 19:32:11.138461 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3569) Feb 13 19:32:11.164059 kubelet[2372]: E0213 19:32:11.164004 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:32:11.382393 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3569) Feb 13 19:32:12.164449 kubelet[2372]: E0213 19:32:12.164396 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:32:13.165405 kubelet[2372]: E0213 19:32:13.165321 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:32:14.166258 kubelet[2372]: E0213 19:32:14.166193 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:32:14.620927 kubelet[2372]: I0213 19:32:14.620851 2372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-72nc4" podStartSLOduration=9.947135041 podStartE2EDuration="14.620826074s" podCreationTimestamp="2025-02-13 19:32:00 +0000 UTC" firstStartedPulling="2025-02-13 19:32:05.121395813 +0000 UTC m=+29.586133512" lastFinishedPulling="2025-02-13 19:32:09.79508683 +0000 UTC m=+34.259824545" observedRunningTime="2025-02-13 19:32:10.588032258 +0000 UTC m=+35.052769981" watchObservedRunningTime="2025-02-13 19:32:14.620826074 +0000 UTC m=+39.085563804" Feb 13 19:32:14.649016 systemd[1]: Created slice kubepods-besteffort-podb7719648_158c_4109_a73a_540d9f455b7f.slice - libcontainer container kubepods-besteffort-podb7719648_158c_4109_a73a_540d9f455b7f.slice. 
Feb 13 19:32:14.693732 kubelet[2372]: I0213 19:32:14.693654 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b7719648-158c-4109-a73a-540d9f455b7f-data\") pod \"nfs-server-provisioner-0\" (UID: \"b7719648-158c-4109-a73a-540d9f455b7f\") " pod="default/nfs-server-provisioner-0"
Feb 13 19:32:14.693732 kubelet[2372]: I0213 19:32:14.693719 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4mbk\" (UniqueName: \"kubernetes.io/projected/b7719648-158c-4109-a73a-540d9f455b7f-kube-api-access-b4mbk\") pod \"nfs-server-provisioner-0\" (UID: \"b7719648-158c-4109-a73a-540d9f455b7f\") " pod="default/nfs-server-provisioner-0"
Feb 13 19:32:14.954673 containerd[1908]: time="2025-02-13T19:32:14.953318140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b7719648-158c-4109-a73a-540d9f455b7f,Namespace:default,Attempt:0,}"
Feb 13 19:32:15.080685 systemd-networkd[1748]: lxc89b07690ab5b: Link UP
Feb 13 19:32:15.082072 (udev-worker)[3560]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:32:15.083328 kernel: eth0: renamed from tmp68f7e
Feb 13 19:32:15.087822 systemd-networkd[1748]: lxc89b07690ab5b: Gained carrier
Feb 13 19:32:15.167280 kubelet[2372]: E0213 19:32:15.167202 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:15.282680 containerd[1908]: time="2025-02-13T19:32:15.282461951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:32:15.282680 containerd[1908]: time="2025-02-13T19:32:15.282558601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:32:15.282680 containerd[1908]: time="2025-02-13T19:32:15.282576777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:32:15.283138 containerd[1908]: time="2025-02-13T19:32:15.283042805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:32:15.314514 systemd[1]: Started cri-containerd-68f7ebe6403e47e613110432193f8535fb712b75760498b4ec9f05db1c3195c0.scope - libcontainer container 68f7ebe6403e47e613110432193f8535fb712b75760498b4ec9f05db1c3195c0.
Feb 13 19:32:15.362470 containerd[1908]: time="2025-02-13T19:32:15.362323852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b7719648-158c-4109-a73a-540d9f455b7f,Namespace:default,Attempt:0,} returns sandbox id \"68f7ebe6403e47e613110432193f8535fb712b75760498b4ec9f05db1c3195c0\""
Feb 13 19:32:15.364675 containerd[1908]: time="2025-02-13T19:32:15.364630329Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 13 19:32:16.134477 kubelet[2372]: E0213 19:32:16.134409 2372 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:16.168079 kubelet[2372]: E0213 19:32:16.168039 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:16.296632 systemd-networkd[1748]: lxc89b07690ab5b: Gained IPv6LL
Feb 13 19:32:17.169334 kubelet[2372]: E0213 19:32:17.169278 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:17.773210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1548342854.mount: Deactivated successfully.
Feb 13 19:32:18.170736 kubelet[2372]: E0213 19:32:18.170611 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:18.693986 ntpd[1885]: Listen normally on 13 lxc89b07690ab5b [fe80::cc34:f4ff:fe43:b0d7%11]:123
Feb 13 19:32:18.694755 ntpd[1885]: 13 Feb 19:32:18 ntpd[1885]: Listen normally on 13 lxc89b07690ab5b [fe80::cc34:f4ff:fe43:b0d7%11]:123
Feb 13 19:32:19.171810 kubelet[2372]: E0213 19:32:19.171734 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:19.849441 containerd[1908]: time="2025-02-13T19:32:19.849387877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:32:19.852172 containerd[1908]: time="2025-02-13T19:32:19.851993226Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Feb 13 19:32:19.853535 containerd[1908]: time="2025-02-13T19:32:19.853170473Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:32:19.855997 containerd[1908]: time="2025-02-13T19:32:19.855960067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:32:19.857071 containerd[1908]: time="2025-02-13T19:32:19.857032739Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.492360485s"
Feb 13 19:32:19.857159 containerd[1908]: time="2025-02-13T19:32:19.857079129Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 13 19:32:19.859726 containerd[1908]: time="2025-02-13T19:32:19.859695698Z" level=info msg="CreateContainer within sandbox \"68f7ebe6403e47e613110432193f8535fb712b75760498b4ec9f05db1c3195c0\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 13 19:32:19.875661 containerd[1908]: time="2025-02-13T19:32:19.875615080Z" level=info msg="CreateContainer within sandbox \"68f7ebe6403e47e613110432193f8535fb712b75760498b4ec9f05db1c3195c0\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"52add08e6c033eae5c17ff40b9ac92d655284a46d31e7b050c54e7e2677f303b\""
Feb 13 19:32:19.876278 containerd[1908]: time="2025-02-13T19:32:19.876221179Z" level=info msg="StartContainer for \"52add08e6c033eae5c17ff40b9ac92d655284a46d31e7b050c54e7e2677f303b\""
Feb 13 19:32:19.914504 systemd[1]: Started cri-containerd-52add08e6c033eae5c17ff40b9ac92d655284a46d31e7b050c54e7e2677f303b.scope - libcontainer container 52add08e6c033eae5c17ff40b9ac92d655284a46d31e7b050c54e7e2677f303b.
Feb 13 19:32:19.945942 containerd[1908]: time="2025-02-13T19:32:19.945864682Z" level=info msg="StartContainer for \"52add08e6c033eae5c17ff40b9ac92d655284a46d31e7b050c54e7e2677f303b\" returns successfully"
Feb 13 19:32:20.172966 kubelet[2372]: E0213 19:32:20.172823 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:20.638049 kubelet[2372]: I0213 19:32:20.637982 2372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.143758379 podStartE2EDuration="6.637955666s" podCreationTimestamp="2025-02-13 19:32:14 +0000 UTC" firstStartedPulling="2025-02-13 19:32:15.364051678 +0000 UTC m=+39.828789373" lastFinishedPulling="2025-02-13 19:32:19.858248958 +0000 UTC m=+44.322986660" observedRunningTime="2025-02-13 19:32:20.636468226 +0000 UTC m=+45.101205965" watchObservedRunningTime="2025-02-13 19:32:20.637955666 +0000 UTC m=+45.102693383"
Feb 13 19:32:21.173951 kubelet[2372]: E0213 19:32:21.173883 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:22.174803 kubelet[2372]: E0213 19:32:22.174749 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:23.175665 kubelet[2372]: E0213 19:32:23.175599 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:24.176487 kubelet[2372]: E0213 19:32:24.176421 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:25.176677 kubelet[2372]: E0213 19:32:25.176619 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:26.176997 kubelet[2372]: E0213 19:32:26.176932 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:27.177782 kubelet[2372]: E0213 19:32:27.177722 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:28.178560 kubelet[2372]: E0213 19:32:28.178488 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:29.179154 kubelet[2372]: E0213 19:32:29.179090 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:30.180342 kubelet[2372]: E0213 19:32:30.180264 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:30.235514 systemd[1]: Created slice kubepods-besteffort-pod00518d0a_8f60_4e63_a2dc_3687bf372651.slice - libcontainer container kubepods-besteffort-pod00518d0a_8f60_4e63_a2dc_3687bf372651.slice.
Feb 13 19:32:30.300575 kubelet[2372]: I0213 19:32:30.300467 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8588de32-936c-4695-90c5-83fd52482c14\" (UniqueName: \"kubernetes.io/nfs/00518d0a-8f60-4e63-a2dc-3687bf372651-pvc-8588de32-936c-4695-90c5-83fd52482c14\") pod \"test-pod-1\" (UID: \"00518d0a-8f60-4e63-a2dc-3687bf372651\") " pod="default/test-pod-1"
Feb 13 19:32:30.300575 kubelet[2372]: I0213 19:32:30.300520 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-862nx\" (UniqueName: \"kubernetes.io/projected/00518d0a-8f60-4e63-a2dc-3687bf372651-kube-api-access-862nx\") pod \"test-pod-1\" (UID: \"00518d0a-8f60-4e63-a2dc-3687bf372651\") " pod="default/test-pod-1"
Feb 13 19:32:30.450385 kernel: FS-Cache: Loaded
Feb 13 19:32:30.550754 kernel: RPC: Registered named UNIX socket transport module.
Feb 13 19:32:30.550952 kernel: RPC: Registered udp transport module.
Feb 13 19:32:30.551038 kernel: RPC: Registered tcp transport module.
Feb 13 19:32:30.551071 kernel: RPC: Registered tcp-with-tls transport module.
Feb 13 19:32:30.551101 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 13 19:32:30.806406 kernel: NFS: Registering the id_resolver key type
Feb 13 19:32:30.806531 kernel: Key type id_resolver registered
Feb 13 19:32:30.806565 kernel: Key type id_legacy registered
Feb 13 19:32:30.842767 nfsidmap[3922]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 19:32:30.846120 nfsidmap[3923]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 19:32:31.140551 containerd[1908]: time="2025-02-13T19:32:31.140261144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:00518d0a-8f60-4e63-a2dc-3687bf372651,Namespace:default,Attempt:0,}"
Feb 13 19:32:31.181234 kubelet[2372]: E0213 19:32:31.181146 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:31.197318 systemd-networkd[1748]: lxca05b841708ed: Link UP
Feb 13 19:32:31.198315 kernel: eth0: renamed from tmpb3c2a
Feb 13 19:32:31.205527 systemd-networkd[1748]: lxca05b841708ed: Gained carrier
Feb 13 19:32:31.322010 (udev-worker)[3912]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:32:31.804981 containerd[1908]: time="2025-02-13T19:32:31.804652549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:32:31.805366 containerd[1908]: time="2025-02-13T19:32:31.804999961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:32:31.805366 containerd[1908]: time="2025-02-13T19:32:31.805035524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:32:31.805939 containerd[1908]: time="2025-02-13T19:32:31.805783452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:32:31.841557 systemd[1]: Started cri-containerd-b3c2a22338b8913c3005188aa5fa4e1572b0a5f44af39cb961ab6e24c68c9c62.scope - libcontainer container b3c2a22338b8913c3005188aa5fa4e1572b0a5f44af39cb961ab6e24c68c9c62.
Feb 13 19:32:31.961315 containerd[1908]: time="2025-02-13T19:32:31.961256144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:00518d0a-8f60-4e63-a2dc-3687bf372651,Namespace:default,Attempt:0,} returns sandbox id \"b3c2a22338b8913c3005188aa5fa4e1572b0a5f44af39cb961ab6e24c68c9c62\""
Feb 13 19:32:31.962846 containerd[1908]: time="2025-02-13T19:32:31.962812755Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 19:32:32.181791 kubelet[2372]: E0213 19:32:32.181639 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:32.337525 containerd[1908]: time="2025-02-13T19:32:32.337463524Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:32:32.338764 containerd[1908]: time="2025-02-13T19:32:32.338720159Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 19:32:32.341437 containerd[1908]: time="2025-02-13T19:32:32.341392039Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 378.543167ms"
Feb 13 19:32:32.341437 containerd[1908]: time="2025-02-13T19:32:32.341426918Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 19:32:32.343688 containerd[1908]: time="2025-02-13T19:32:32.343657931Z" level=info msg="CreateContainer within sandbox \"b3c2a22338b8913c3005188aa5fa4e1572b0a5f44af39cb961ab6e24c68c9c62\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 19:32:32.367401 containerd[1908]: time="2025-02-13T19:32:32.367356138Z" level=info msg="CreateContainer within sandbox \"b3c2a22338b8913c3005188aa5fa4e1572b0a5f44af39cb961ab6e24c68c9c62\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"65ccc4a87e7385d72ef3e4ef6dab02a423576c2061146b06e4233d43df2c24c1\""
Feb 13 19:32:32.368399 containerd[1908]: time="2025-02-13T19:32:32.368367750Z" level=info msg="StartContainer for \"65ccc4a87e7385d72ef3e4ef6dab02a423576c2061146b06e4233d43df2c24c1\""
Feb 13 19:32:32.402201 systemd[1]: Started cri-containerd-65ccc4a87e7385d72ef3e4ef6dab02a423576c2061146b06e4233d43df2c24c1.scope - libcontainer container 65ccc4a87e7385d72ef3e4ef6dab02a423576c2061146b06e4233d43df2c24c1.
Feb 13 19:32:32.438933 containerd[1908]: time="2025-02-13T19:32:32.438518126Z" level=info msg="StartContainer for \"65ccc4a87e7385d72ef3e4ef6dab02a423576c2061146b06e4233d43df2c24c1\" returns successfully"
Feb 13 19:32:32.638132 kubelet[2372]: I0213 19:32:32.638042 2372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.258073478 podStartE2EDuration="17.637994696s" podCreationTimestamp="2025-02-13 19:32:15 +0000 UTC" firstStartedPulling="2025-02-13 19:32:31.962236044 +0000 UTC m=+56.426973743" lastFinishedPulling="2025-02-13 19:32:32.342157261 +0000 UTC m=+56.806894961" observedRunningTime="2025-02-13 19:32:32.637752582 +0000 UTC m=+57.102490300" watchObservedRunningTime="2025-02-13 19:32:32.637994696 +0000 UTC m=+57.102732412"
Feb 13 19:32:33.182634 kubelet[2372]: E0213 19:32:33.182571 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:33.190542 systemd-networkd[1748]: lxca05b841708ed: Gained IPv6LL
Feb 13 19:32:34.183152 kubelet[2372]: E0213 19:32:34.183091 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:35.183718 kubelet[2372]: E0213 19:32:35.183658 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:35.693968 ntpd[1885]: Listen normally on 14 lxca05b841708ed [fe80::f069:88ff:fecf:9bb6%13]:123
Feb 13 19:32:35.694660 ntpd[1885]: 13 Feb 19:32:35 ntpd[1885]: Listen normally on 14 lxca05b841708ed [fe80::f069:88ff:fecf:9bb6%13]:123
Feb 13 19:32:36.133069 kubelet[2372]: E0213 19:32:36.133008 2372 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:36.187567 kubelet[2372]: E0213 19:32:36.184453 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:37.184921 kubelet[2372]: E0213 19:32:37.184860 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:38.185457 kubelet[2372]: E0213 19:32:38.185399 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:39.186057 kubelet[2372]: E0213 19:32:39.185983 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:40.186762 kubelet[2372]: E0213 19:32:40.186651 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:41.141917 containerd[1908]: time="2025-02-13T19:32:41.141863162Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:32:41.186961 kubelet[2372]: E0213 19:32:41.186909 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:41.209562 containerd[1908]: time="2025-02-13T19:32:41.209493012Z" level=info msg="StopContainer for \"daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246\" with timeout 2 (s)"
Feb 13 19:32:41.210047 containerd[1908]: time="2025-02-13T19:32:41.210015758Z" level=info msg="Stop container \"daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246\" with signal terminated"
Feb 13 19:32:41.219584 systemd-networkd[1748]: lxc_health: Link DOWN
Feb 13 19:32:41.219594 systemd-networkd[1748]: lxc_health: Lost carrier
Feb 13 19:32:41.246175 systemd[1]: cri-containerd-daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246.scope: Deactivated successfully.
Feb 13 19:32:41.246586 systemd[1]: cri-containerd-daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246.scope: Consumed 7.998s CPU time, 119.5M memory peak, 792K read from disk, 13.3M written to disk.
Feb 13 19:32:41.293977 kubelet[2372]: E0213 19:32:41.290963 2372 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:32:41.311804 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246-rootfs.mount: Deactivated successfully.
Feb 13 19:32:41.344249 containerd[1908]: time="2025-02-13T19:32:41.321055839Z" level=info msg="shim disconnected" id=daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246 namespace=k8s.io
Feb 13 19:32:41.344535 containerd[1908]: time="2025-02-13T19:32:41.344252184Z" level=warning msg="cleaning up after shim disconnected" id=daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246 namespace=k8s.io
Feb 13 19:32:41.344535 containerd[1908]: time="2025-02-13T19:32:41.344277328Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:32:41.365018 containerd[1908]: time="2025-02-13T19:32:41.364845840Z" level=info msg="StopContainer for \"daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246\" returns successfully"
Feb 13 19:32:41.370224 containerd[1908]: time="2025-02-13T19:32:41.370170782Z" level=info msg="StopPodSandbox for \"568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f\""
Feb 13 19:32:41.375906 containerd[1908]: time="2025-02-13T19:32:41.370241867Z" level=info msg="Container to stop \"b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:32:41.375906 containerd[1908]: time="2025-02-13T19:32:41.375900149Z" level=info msg="Container to stop \"d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:32:41.375906 containerd[1908]: time="2025-02-13T19:32:41.375917246Z" level=info msg="Container to stop \"302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:32:41.376147 containerd[1908]: time="2025-02-13T19:32:41.375938725Z" level=info msg="Container to stop \"05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:32:41.376147 containerd[1908]: time="2025-02-13T19:32:41.375953077Z" level=info msg="Container to stop \"daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:32:41.380538 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f-shm.mount: Deactivated successfully.
Feb 13 19:32:41.386539 systemd[1]: cri-containerd-568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f.scope: Deactivated successfully.
Feb 13 19:32:41.412032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f-rootfs.mount: Deactivated successfully.
Feb 13 19:32:41.418056 containerd[1908]: time="2025-02-13T19:32:41.417985352Z" level=info msg="shim disconnected" id=568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f namespace=k8s.io
Feb 13 19:32:41.418056 containerd[1908]: time="2025-02-13T19:32:41.418053633Z" level=warning msg="cleaning up after shim disconnected" id=568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f namespace=k8s.io
Feb 13 19:32:41.420455 containerd[1908]: time="2025-02-13T19:32:41.418066245Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:32:41.434813 containerd[1908]: time="2025-02-13T19:32:41.434759498Z" level=info msg="TearDown network for sandbox \"568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f\" successfully"
Feb 13 19:32:41.434813 containerd[1908]: time="2025-02-13T19:32:41.434799121Z" level=info msg="StopPodSandbox for \"568531edf1e0abbe4bcf1d81ec798b4ff4db25aeefb6f1d81f7dc869fb03ea2f\" returns successfully"
Feb 13 19:32:41.503747 kubelet[2372]: I0213 19:32:41.503699 2372 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-bpf-maps\") pod \"3daaac13-930b-48fc-a743-9e5b15729f18\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") "
Feb 13 19:32:41.503975 kubelet[2372]: I0213 19:32:41.503766 2372 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3daaac13-930b-48fc-a743-9e5b15729f18-clustermesh-secrets\") pod \"3daaac13-930b-48fc-a743-9e5b15729f18\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") "
Feb 13 19:32:41.503975 kubelet[2372]: I0213 19:32:41.503799 2372 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68xpj\" (UniqueName: \"kubernetes.io/projected/3daaac13-930b-48fc-a743-9e5b15729f18-kube-api-access-68xpj\") pod \"3daaac13-930b-48fc-a743-9e5b15729f18\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") "
Feb 13 19:32:41.503975 kubelet[2372]: I0213 19:32:41.503828 2372 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-cilium-cgroup\") pod \"3daaac13-930b-48fc-a743-9e5b15729f18\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") "
Feb 13 19:32:41.503975 kubelet[2372]: I0213 19:32:41.503857 2372 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-lib-modules\") pod \"3daaac13-930b-48fc-a743-9e5b15729f18\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") "
Feb 13 19:32:41.503975 kubelet[2372]: I0213 19:32:41.503883 2372 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3daaac13-930b-48fc-a743-9e5b15729f18-cilium-config-path\") pod \"3daaac13-930b-48fc-a743-9e5b15729f18\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") "
Feb 13 19:32:41.503975 kubelet[2372]: I0213 19:32:41.503904 2372 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-cni-path\") pod \"3daaac13-930b-48fc-a743-9e5b15729f18\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") "
Feb 13 19:32:41.504248 kubelet[2372]: I0213 19:32:41.503927 2372 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-xtables-lock\") pod \"3daaac13-930b-48fc-a743-9e5b15729f18\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") "
Feb 13 19:32:41.504248 kubelet[2372]: I0213 19:32:41.503950 2372 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-cilium-run\") pod \"3daaac13-930b-48fc-a743-9e5b15729f18\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") "
Feb 13 19:32:41.504248 kubelet[2372]: I0213 19:32:41.503976 2372 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-host-proc-sys-net\") pod \"3daaac13-930b-48fc-a743-9e5b15729f18\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") "
Feb 13 19:32:41.504248 kubelet[2372]: I0213 19:32:41.503999 2372 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-hostproc\") pod \"3daaac13-930b-48fc-a743-9e5b15729f18\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") "
Feb 13 19:32:41.504248 kubelet[2372]: I0213 19:32:41.504022 2372 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-etc-cni-netd\") pod \"3daaac13-930b-48fc-a743-9e5b15729f18\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") "
Feb 13 19:32:41.504248 kubelet[2372]: I0213 19:32:41.504051 2372 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-host-proc-sys-kernel\") pod \"3daaac13-930b-48fc-a743-9e5b15729f18\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") "
Feb 13 19:32:41.504508 kubelet[2372]: I0213 19:32:41.504084 2372 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3daaac13-930b-48fc-a743-9e5b15729f18-hubble-tls\") pod \"3daaac13-930b-48fc-a743-9e5b15729f18\" (UID: \"3daaac13-930b-48fc-a743-9e5b15729f18\") "
Feb 13 19:32:41.506332 kubelet[2372]: I0213 19:32:41.504650 2372 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-cni-path" (OuterVolumeSpecName: "cni-path") pod "3daaac13-930b-48fc-a743-9e5b15729f18" (UID: "3daaac13-930b-48fc-a743-9e5b15729f18"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:41.506332 kubelet[2372]: I0213 19:32:41.504720 2372 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3daaac13-930b-48fc-a743-9e5b15729f18" (UID: "3daaac13-930b-48fc-a743-9e5b15729f18"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:41.507887 kubelet[2372]: I0213 19:32:41.507845 2372 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3daaac13-930b-48fc-a743-9e5b15729f18" (UID: "3daaac13-930b-48fc-a743-9e5b15729f18"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:41.508089 kubelet[2372]: I0213 19:32:41.508057 2372 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3daaac13-930b-48fc-a743-9e5b15729f18" (UID: "3daaac13-930b-48fc-a743-9e5b15729f18"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:41.508270 kubelet[2372]: I0213 19:32:41.508210 2372 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3daaac13-930b-48fc-a743-9e5b15729f18" (UID: "3daaac13-930b-48fc-a743-9e5b15729f18"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:41.508428 kubelet[2372]: I0213 19:32:41.508410 2372 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-hostproc" (OuterVolumeSpecName: "hostproc") pod "3daaac13-930b-48fc-a743-9e5b15729f18" (UID: "3daaac13-930b-48fc-a743-9e5b15729f18"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:41.508561 kubelet[2372]: I0213 19:32:41.508540 2372 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3daaac13-930b-48fc-a743-9e5b15729f18" (UID: "3daaac13-930b-48fc-a743-9e5b15729f18"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:41.508688 kubelet[2372]: I0213 19:32:41.508672 2372 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3daaac13-930b-48fc-a743-9e5b15729f18" (UID: "3daaac13-930b-48fc-a743-9e5b15729f18"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:41.508807 kubelet[2372]: I0213 19:32:41.508792 2372 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3daaac13-930b-48fc-a743-9e5b15729f18" (UID: "3daaac13-930b-48fc-a743-9e5b15729f18"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:41.508921 kubelet[2372]: I0213 19:32:41.508905 2372 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3daaac13-930b-48fc-a743-9e5b15729f18" (UID: "3daaac13-930b-48fc-a743-9e5b15729f18"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:41.520590 kubelet[2372]: I0213 19:32:41.520534 2372 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3daaac13-930b-48fc-a743-9e5b15729f18-kube-api-access-68xpj" (OuterVolumeSpecName: "kube-api-access-68xpj") pod "3daaac13-930b-48fc-a743-9e5b15729f18" (UID: "3daaac13-930b-48fc-a743-9e5b15729f18"). InnerVolumeSpecName "kube-api-access-68xpj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 13 19:32:41.521129 kubelet[2372]: I0213 19:32:41.521080 2372 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3daaac13-930b-48fc-a743-9e5b15729f18-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3daaac13-930b-48fc-a743-9e5b15729f18" (UID: "3daaac13-930b-48fc-a743-9e5b15729f18"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 13 19:32:41.525226 systemd[1]: var-lib-kubelet-pods-3daaac13\x2d930b\x2d48fc\x2da743\x2d9e5b15729f18-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 19:32:41.530948 kubelet[2372]: I0213 19:32:41.530901 2372 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3daaac13-930b-48fc-a743-9e5b15729f18-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3daaac13-930b-48fc-a743-9e5b15729f18" (UID: "3daaac13-930b-48fc-a743-9e5b15729f18"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 13 19:32:41.533865 kubelet[2372]: I0213 19:32:41.533739 2372 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3daaac13-930b-48fc-a743-9e5b15729f18-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3daaac13-930b-48fc-a743-9e5b15729f18" (UID: "3daaac13-930b-48fc-a743-9e5b15729f18"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 13 19:32:41.604478 kubelet[2372]: I0213 19:32:41.604421 2372 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3daaac13-930b-48fc-a743-9e5b15729f18-hubble-tls\") on node \"172.31.18.16\" DevicePath \"\""
Feb 13 19:32:41.604478 kubelet[2372]: I0213 19:32:41.604466 2372 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-bpf-maps\") on node \"172.31.18.16\" DevicePath \"\""
Feb 13 19:32:41.604478 kubelet[2372]: I0213 19:32:41.604481 2372 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3daaac13-930b-48fc-a743-9e5b15729f18-clustermesh-secrets\") on node \"172.31.18.16\" DevicePath \"\""
Feb 13 19:32:41.604740 kubelet[2372]: I0213 19:32:41.604498 2372 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-68xpj\" (UniqueName: \"kubernetes.io/projected/3daaac13-930b-48fc-a743-9e5b15729f18-kube-api-access-68xpj\") on node \"172.31.18.16\" DevicePath \"\""
Feb 13 19:32:41.604740 kubelet[2372]: I0213 19:32:41.604512 2372 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-cilium-cgroup\") on node \"172.31.18.16\" DevicePath \"\""
Feb 13 19:32:41.604740 kubelet[2372]: I0213 19:32:41.604524 2372 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-xtables-lock\") on node \"172.31.18.16\" DevicePath \"\""
Feb 13 19:32:41.604740 kubelet[2372]: I0213 19:32:41.604535 2372 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-cilium-run\") on node \"172.31.18.16\" DevicePath \"\""
Feb 13 19:32:41.604740 kubelet[2372]: I0213 19:32:41.604546 2372 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-lib-modules\") on node \"172.31.18.16\" DevicePath \"\""
Feb 13 19:32:41.604740 kubelet[2372]: I0213 19:32:41.604556 2372 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3daaac13-930b-48fc-a743-9e5b15729f18-cilium-config-path\") on node \"172.31.18.16\" DevicePath \"\""
Feb 13 19:32:41.604740 kubelet[2372]: I0213 19:32:41.604569 2372 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-cni-path\") on node \"172.31.18.16\" DevicePath \"\""
Feb 13 19:32:41.604740 kubelet[2372]: I0213 19:32:41.604581 2372 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-host-proc-sys-net\") on node \"172.31.18.16\" DevicePath \"\""
Feb 13 19:32:41.604937 kubelet[2372]: I0213 19:32:41.604593 2372 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-hostproc\") on node \"172.31.18.16\" DevicePath \"\""
Feb 13 19:32:41.604937 kubelet[2372]: I0213 19:32:41.604604 2372 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-etc-cni-netd\")
on node \"172.31.18.16\" DevicePath \"\"" Feb 13 19:32:41.604937 kubelet[2372]: I0213 19:32:41.604615 2372 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3daaac13-930b-48fc-a743-9e5b15729f18-host-proc-sys-kernel\") on node \"172.31.18.16\" DevicePath \"\"" Feb 13 19:32:41.630283 systemd[1]: var-lib-kubelet-pods-3daaac13\x2d930b\x2d48fc\x2da743\x2d9e5b15729f18-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d68xpj.mount: Deactivated successfully. Feb 13 19:32:41.632622 systemd[1]: var-lib-kubelet-pods-3daaac13\x2d930b\x2d48fc\x2da743\x2d9e5b15729f18-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:32:41.660348 kubelet[2372]: I0213 19:32:41.659146 2372 scope.go:117] "RemoveContainer" containerID="daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246" Feb 13 19:32:41.670234 systemd[1]: Removed slice kubepods-burstable-pod3daaac13_930b_48fc_a743_9e5b15729f18.slice - libcontainer container kubepods-burstable-pod3daaac13_930b_48fc_a743_9e5b15729f18.slice. Feb 13 19:32:41.670431 systemd[1]: kubepods-burstable-pod3daaac13_930b_48fc_a743_9e5b15729f18.slice: Consumed 8.098s CPU time, 119.8M memory peak, 792K read from disk, 13.3M written to disk. 
Feb 13 19:32:41.675093 containerd[1908]: time="2025-02-13T19:32:41.675054947Z" level=info msg="RemoveContainer for \"daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246\"" Feb 13 19:32:41.678217 containerd[1908]: time="2025-02-13T19:32:41.678182112Z" level=info msg="RemoveContainer for \"daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246\" returns successfully" Feb 13 19:32:41.678919 kubelet[2372]: I0213 19:32:41.678881 2372 scope.go:117] "RemoveContainer" containerID="05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081" Feb 13 19:32:41.681439 containerd[1908]: time="2025-02-13T19:32:41.681400348Z" level=info msg="RemoveContainer for \"05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081\"" Feb 13 19:32:41.687433 containerd[1908]: time="2025-02-13T19:32:41.687385041Z" level=info msg="RemoveContainer for \"05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081\" returns successfully" Feb 13 19:32:41.687723 kubelet[2372]: I0213 19:32:41.687682 2372 scope.go:117] "RemoveContainer" containerID="d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8" Feb 13 19:32:41.690570 containerd[1908]: time="2025-02-13T19:32:41.690340607Z" level=info msg="RemoveContainer for \"d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8\"" Feb 13 19:32:41.696959 containerd[1908]: time="2025-02-13T19:32:41.696916929Z" level=info msg="RemoveContainer for \"d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8\" returns successfully" Feb 13 19:32:41.705356 kubelet[2372]: I0213 19:32:41.697509 2372 scope.go:117] "RemoveContainer" containerID="b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d" Feb 13 19:32:41.706336 containerd[1908]: time="2025-02-13T19:32:41.706097420Z" level=info msg="RemoveContainer for \"b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d\"" Feb 13 19:32:41.749119 containerd[1908]: time="2025-02-13T19:32:41.746829050Z" level=info msg="RemoveContainer 
for \"b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d\" returns successfully" Feb 13 19:32:41.749533 kubelet[2372]: I0213 19:32:41.747168 2372 scope.go:117] "RemoveContainer" containerID="302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317" Feb 13 19:32:41.751335 containerd[1908]: time="2025-02-13T19:32:41.749817438Z" level=info msg="RemoveContainer for \"302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317\"" Feb 13 19:32:41.772900 containerd[1908]: time="2025-02-13T19:32:41.772847797Z" level=info msg="RemoveContainer for \"302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317\" returns successfully" Feb 13 19:32:41.773378 kubelet[2372]: I0213 19:32:41.773350 2372 scope.go:117] "RemoveContainer" containerID="daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246" Feb 13 19:32:41.773888 containerd[1908]: time="2025-02-13T19:32:41.773814277Z" level=error msg="ContainerStatus for \"daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246\": not found" Feb 13 19:32:41.781319 kubelet[2372]: E0213 19:32:41.781271 2372 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246\": not found" containerID="daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246" Feb 13 19:32:41.781508 kubelet[2372]: I0213 19:32:41.781340 2372 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246"} err="failed to get container status \"daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"daceadd967cd673b018b96d80655fd12ac3e688ba5f389778f40866c1f98e246\": not found" Feb 13 19:32:41.781508 kubelet[2372]: I0213 19:32:41.781400 2372 scope.go:117] "RemoveContainer" containerID="05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081" Feb 13 19:32:41.781863 containerd[1908]: time="2025-02-13T19:32:41.781812234Z" level=error msg="ContainerStatus for \"05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081\": not found" Feb 13 19:32:41.782151 kubelet[2372]: E0213 19:32:41.781993 2372 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081\": not found" containerID="05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081" Feb 13 19:32:41.782151 kubelet[2372]: I0213 19:32:41.782019 2372 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081"} err="failed to get container status \"05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081\": rpc error: code = NotFound desc = an error occurred when try to find container \"05a2e127f2ea7f75752a7448c91e51104193a060c73c53fc342712acbaef7081\": not found" Feb 13 19:32:41.782151 kubelet[2372]: I0213 19:32:41.782038 2372 scope.go:117] "RemoveContainer" containerID="d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8" Feb 13 19:32:41.782482 containerd[1908]: time="2025-02-13T19:32:41.782437328Z" level=error msg="ContainerStatus for \"d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8\": not found" Feb 13 19:32:41.782677 kubelet[2372]: E0213 19:32:41.782654 2372 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8\": not found" containerID="d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8" Feb 13 19:32:41.782769 kubelet[2372]: I0213 19:32:41.782682 2372 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8"} err="failed to get container status \"d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9580974b388e4cafcffbc3c894c809b1262ce977087d4a5ed2d80d2f05bd5e8\": not found" Feb 13 19:32:41.782769 kubelet[2372]: I0213 19:32:41.782702 2372 scope.go:117] "RemoveContainer" containerID="b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d" Feb 13 19:32:41.782915 containerd[1908]: time="2025-02-13T19:32:41.782888751Z" level=error msg="ContainerStatus for \"b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d\": not found" Feb 13 19:32:41.783137 kubelet[2372]: E0213 19:32:41.783114 2372 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d\": not found" containerID="b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d" Feb 13 19:32:41.783213 kubelet[2372]: I0213 19:32:41.783141 2372 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d"} err="failed to get container status \"b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0035e5fed5e8f4e7403436223a486e4eef4685240f2588cbb52c14502f3384d\": not found" Feb 13 19:32:41.783213 kubelet[2372]: I0213 19:32:41.783166 2372 scope.go:117] "RemoveContainer" containerID="302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317" Feb 13 19:32:41.783474 containerd[1908]: time="2025-02-13T19:32:41.783434370Z" level=error msg="ContainerStatus for \"302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317\": not found" Feb 13 19:32:41.783653 kubelet[2372]: E0213 19:32:41.783626 2372 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317\": not found" containerID="302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317" Feb 13 19:32:41.783735 kubelet[2372]: I0213 19:32:41.783656 2372 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317"} err="failed to get container status \"302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317\": rpc error: code = NotFound desc = an error occurred when try to find container \"302eba125a6877fb8befa2a2bec11e8028b6a8a176d2d1c2e11caa4ae1d5c317\": not found" Feb 13 19:32:42.187459 kubelet[2372]: E0213 19:32:42.187401 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:32:42.293317 kubelet[2372]: I0213 
19:32:42.293249 2372 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3daaac13-930b-48fc-a743-9e5b15729f18" path="/var/lib/kubelet/pods/3daaac13-930b-48fc-a743-9e5b15729f18/volumes" Feb 13 19:32:43.187821 kubelet[2372]: E0213 19:32:43.187760 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:32:43.667886 kubelet[2372]: I0213 19:32:43.667844 2372 memory_manager.go:355] "RemoveStaleState removing state" podUID="3daaac13-930b-48fc-a743-9e5b15729f18" containerName="cilium-agent" Feb 13 19:32:43.689158 kubelet[2372]: I0213 19:32:43.688139 2372 status_manager.go:890] "Failed to get status for pod" podUID="8e72874e-c408-45c4-92fc-b4536b3e74d6" pod="kube-system/cilium-vjn68" err="pods \"cilium-vjn68\" is forbidden: User \"system:node:172.31.18.16\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172.31.18.16' and this object" Feb 13 19:32:43.694018 ntpd[1885]: Deleting interface #11 lxc_health, fe80::ec3c:f2ff:fea1:e4f%7#123, interface stats: received=0, sent=0, dropped=0, active_time=39 secs Feb 13 19:32:43.694622 ntpd[1885]: 13 Feb 19:32:43 ntpd[1885]: Deleting interface #11 lxc_health, fe80::ec3c:f2ff:fea1:e4f%7#123, interface stats: received=0, sent=0, dropped=0, active_time=39 secs Feb 13 19:32:43.698224 systemd[1]: Created slice kubepods-burstable-pod8e72874e_c408_45c4_92fc_b4536b3e74d6.slice - libcontainer container kubepods-burstable-pod8e72874e_c408_45c4_92fc_b4536b3e74d6.slice. 
Feb 13 19:32:43.718257 kubelet[2372]: I0213 19:32:43.718216 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e72874e-c408-45c4-92fc-b4536b3e74d6-cni-path\") pod \"cilium-vjn68\" (UID: \"8e72874e-c408-45c4-92fc-b4536b3e74d6\") " pod="kube-system/cilium-vjn68" Feb 13 19:32:43.718548 kubelet[2372]: I0213 19:32:43.718528 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e72874e-c408-45c4-92fc-b4536b3e74d6-clustermesh-secrets\") pod \"cilium-vjn68\" (UID: \"8e72874e-c408-45c4-92fc-b4536b3e74d6\") " pod="kube-system/cilium-vjn68" Feb 13 19:32:43.719087 kubelet[2372]: I0213 19:32:43.719061 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8e72874e-c408-45c4-92fc-b4536b3e74d6-cilium-ipsec-secrets\") pod \"cilium-vjn68\" (UID: \"8e72874e-c408-45c4-92fc-b4536b3e74d6\") " pod="kube-system/cilium-vjn68" Feb 13 19:32:43.719288 kubelet[2372]: I0213 19:32:43.719242 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e72874e-c408-45c4-92fc-b4536b3e74d6-bpf-maps\") pod \"cilium-vjn68\" (UID: \"8e72874e-c408-45c4-92fc-b4536b3e74d6\") " pod="kube-system/cilium-vjn68" Feb 13 19:32:43.719519 kubelet[2372]: I0213 19:32:43.719394 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e72874e-c408-45c4-92fc-b4536b3e74d6-cilium-cgroup\") pod \"cilium-vjn68\" (UID: \"8e72874e-c408-45c4-92fc-b4536b3e74d6\") " pod="kube-system/cilium-vjn68" Feb 13 19:32:43.719519 kubelet[2372]: I0213 19:32:43.719430 2372 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e72874e-c408-45c4-92fc-b4536b3e74d6-etc-cni-netd\") pod \"cilium-vjn68\" (UID: \"8e72874e-c408-45c4-92fc-b4536b3e74d6\") " pod="kube-system/cilium-vjn68" Feb 13 19:32:43.719519 kubelet[2372]: I0213 19:32:43.719491 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e72874e-c408-45c4-92fc-b4536b3e74d6-host-proc-sys-net\") pod \"cilium-vjn68\" (UID: \"8e72874e-c408-45c4-92fc-b4536b3e74d6\") " pod="kube-system/cilium-vjn68" Feb 13 19:32:43.719736 systemd[1]: Created slice kubepods-besteffort-podea596151_f44a_4029_80a3_74380ecbfe13.slice - libcontainer container kubepods-besteffort-podea596151_f44a_4029_80a3_74380ecbfe13.slice. Feb 13 19:32:43.723033 kubelet[2372]: I0213 19:32:43.719730 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e72874e-c408-45c4-92fc-b4536b3e74d6-host-proc-sys-kernel\") pod \"cilium-vjn68\" (UID: \"8e72874e-c408-45c4-92fc-b4536b3e74d6\") " pod="kube-system/cilium-vjn68" Feb 13 19:32:43.723033 kubelet[2372]: I0213 19:32:43.719794 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e72874e-c408-45c4-92fc-b4536b3e74d6-cilium-run\") pod \"cilium-vjn68\" (UID: \"8e72874e-c408-45c4-92fc-b4536b3e74d6\") " pod="kube-system/cilium-vjn68" Feb 13 19:32:43.723033 kubelet[2372]: I0213 19:32:43.719850 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e72874e-c408-45c4-92fc-b4536b3e74d6-lib-modules\") pod \"cilium-vjn68\" (UID: \"8e72874e-c408-45c4-92fc-b4536b3e74d6\") " pod="kube-system/cilium-vjn68" Feb 13 
19:32:43.723033 kubelet[2372]: I0213 19:32:43.719880 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e72874e-c408-45c4-92fc-b4536b3e74d6-xtables-lock\") pod \"cilium-vjn68\" (UID: \"8e72874e-c408-45c4-92fc-b4536b3e74d6\") " pod="kube-system/cilium-vjn68" Feb 13 19:32:43.723033 kubelet[2372]: I0213 19:32:43.719951 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e72874e-c408-45c4-92fc-b4536b3e74d6-hostproc\") pod \"cilium-vjn68\" (UID: \"8e72874e-c408-45c4-92fc-b4536b3e74d6\") " pod="kube-system/cilium-vjn68" Feb 13 19:32:43.723033 kubelet[2372]: I0213 19:32:43.721031 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e72874e-c408-45c4-92fc-b4536b3e74d6-cilium-config-path\") pod \"cilium-vjn68\" (UID: \"8e72874e-c408-45c4-92fc-b4536b3e74d6\") " pod="kube-system/cilium-vjn68" Feb 13 19:32:43.724323 kubelet[2372]: I0213 19:32:43.721071 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e72874e-c408-45c4-92fc-b4536b3e74d6-hubble-tls\") pod \"cilium-vjn68\" (UID: \"8e72874e-c408-45c4-92fc-b4536b3e74d6\") " pod="kube-system/cilium-vjn68" Feb 13 19:32:43.724323 kubelet[2372]: I0213 19:32:43.721099 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8zpz\" (UniqueName: \"kubernetes.io/projected/8e72874e-c408-45c4-92fc-b4536b3e74d6-kube-api-access-m8zpz\") pod \"cilium-vjn68\" (UID: \"8e72874e-c408-45c4-92fc-b4536b3e74d6\") " pod="kube-system/cilium-vjn68" Feb 13 19:32:43.822059 kubelet[2372]: I0213 19:32:43.821909 2372 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrscl\" (UniqueName: \"kubernetes.io/projected/ea596151-f44a-4029-80a3-74380ecbfe13-kube-api-access-hrscl\") pod \"cilium-operator-6c4d7847fc-jbt8z\" (UID: \"ea596151-f44a-4029-80a3-74380ecbfe13\") " pod="kube-system/cilium-operator-6c4d7847fc-jbt8z" Feb 13 19:32:43.822247 kubelet[2372]: I0213 19:32:43.822223 2372 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea596151-f44a-4029-80a3-74380ecbfe13-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jbt8z\" (UID: \"ea596151-f44a-4029-80a3-74380ecbfe13\") " pod="kube-system/cilium-operator-6c4d7847fc-jbt8z" Feb 13 19:32:44.015917 containerd[1908]: time="2025-02-13T19:32:44.015846145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vjn68,Uid:8e72874e-c408-45c4-92fc-b4536b3e74d6,Namespace:kube-system,Attempt:0,}" Feb 13 19:32:44.037356 containerd[1908]: time="2025-02-13T19:32:44.036282009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jbt8z,Uid:ea596151-f44a-4029-80a3-74380ecbfe13,Namespace:kube-system,Attempt:0,}" Feb 13 19:32:44.068379 containerd[1908]: time="2025-02-13T19:32:44.067993791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:32:44.068379 containerd[1908]: time="2025-02-13T19:32:44.068228379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:32:44.068379 containerd[1908]: time="2025-02-13T19:32:44.068255140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:32:44.068806 containerd[1908]: time="2025-02-13T19:32:44.068748552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:32:44.078095 containerd[1908]: time="2025-02-13T19:32:44.077965240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:32:44.078095 containerd[1908]: time="2025-02-13T19:32:44.078026073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:32:44.078095 containerd[1908]: time="2025-02-13T19:32:44.078047681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:32:44.079925 containerd[1908]: time="2025-02-13T19:32:44.079504853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:32:44.097707 systemd[1]: Started cri-containerd-4ea480bdd650d78e8b26fede11e42f34a8590c16fad919491eb518151baf38bb.scope - libcontainer container 4ea480bdd650d78e8b26fede11e42f34a8590c16fad919491eb518151baf38bb. Feb 13 19:32:44.121223 systemd[1]: Started cri-containerd-64d91f6c089be8e2b2a9032d08841f9442b6b473223797915f4da75a4df46c2a.scope - libcontainer container 64d91f6c089be8e2b2a9032d08841f9442b6b473223797915f4da75a4df46c2a. 
Feb 13 19:32:44.178571 containerd[1908]: time="2025-02-13T19:32:44.178527692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vjn68,Uid:8e72874e-c408-45c4-92fc-b4536b3e74d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ea480bdd650d78e8b26fede11e42f34a8590c16fad919491eb518151baf38bb\"" Feb 13 19:32:44.183749 containerd[1908]: time="2025-02-13T19:32:44.183499602Z" level=info msg="CreateContainer within sandbox \"4ea480bdd650d78e8b26fede11e42f34a8590c16fad919491eb518151baf38bb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:32:44.189041 kubelet[2372]: E0213 19:32:44.188901 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:32:44.242018 containerd[1908]: time="2025-02-13T19:32:44.241973734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jbt8z,Uid:ea596151-f44a-4029-80a3-74380ecbfe13,Namespace:kube-system,Attempt:0,} returns sandbox id \"64d91f6c089be8e2b2a9032d08841f9442b6b473223797915f4da75a4df46c2a\"" Feb 13 19:32:44.245031 containerd[1908]: time="2025-02-13T19:32:44.244673593Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:32:44.248128 containerd[1908]: time="2025-02-13T19:32:44.247833217Z" level=info msg="CreateContainer within sandbox \"4ea480bdd650d78e8b26fede11e42f34a8590c16fad919491eb518151baf38bb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d7e5bdf829332e66d13a5cfdaa26e99310cf9da2c911616c1d1390f720ca674e\"" Feb 13 19:32:44.249334 containerd[1908]: time="2025-02-13T19:32:44.248550964Z" level=info msg="StartContainer for \"d7e5bdf829332e66d13a5cfdaa26e99310cf9da2c911616c1d1390f720ca674e\"" Feb 13 19:32:44.283531 systemd[1]: Started cri-containerd-d7e5bdf829332e66d13a5cfdaa26e99310cf9da2c911616c1d1390f720ca674e.scope - libcontainer container 
d7e5bdf829332e66d13a5cfdaa26e99310cf9da2c911616c1d1390f720ca674e. Feb 13 19:32:44.356728 containerd[1908]: time="2025-02-13T19:32:44.356687317Z" level=info msg="StartContainer for \"d7e5bdf829332e66d13a5cfdaa26e99310cf9da2c911616c1d1390f720ca674e\" returns successfully" Feb 13 19:32:44.539600 systemd[1]: cri-containerd-d7e5bdf829332e66d13a5cfdaa26e99310cf9da2c911616c1d1390f720ca674e.scope: Deactivated successfully. Feb 13 19:32:44.540158 systemd[1]: cri-containerd-d7e5bdf829332e66d13a5cfdaa26e99310cf9da2c911616c1d1390f720ca674e.scope: Consumed 22ms CPU time, 9.8M memory peak, 3.3M read from disk. Feb 13 19:32:44.609747 containerd[1908]: time="2025-02-13T19:32:44.609664783Z" level=info msg="shim disconnected" id=d7e5bdf829332e66d13a5cfdaa26e99310cf9da2c911616c1d1390f720ca674e namespace=k8s.io Feb 13 19:32:44.609747 containerd[1908]: time="2025-02-13T19:32:44.609728864Z" level=warning msg="cleaning up after shim disconnected" id=d7e5bdf829332e66d13a5cfdaa26e99310cf9da2c911616c1d1390f720ca674e namespace=k8s.io Feb 13 19:32:44.609747 containerd[1908]: time="2025-02-13T19:32:44.609742448Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:32:44.674247 containerd[1908]: time="2025-02-13T19:32:44.674201471Z" level=info msg="CreateContainer within sandbox \"4ea480bdd650d78e8b26fede11e42f34a8590c16fad919491eb518151baf38bb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:32:44.687490 containerd[1908]: time="2025-02-13T19:32:44.687436724Z" level=info msg="CreateContainer within sandbox \"4ea480bdd650d78e8b26fede11e42f34a8590c16fad919491eb518151baf38bb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"569214a099c598342513fb63b68c2e8d92ba3a0acee855c087a2c08936e32ebf\"" Feb 13 19:32:44.694152 containerd[1908]: time="2025-02-13T19:32:44.694005721Z" level=info msg="StartContainer for \"569214a099c598342513fb63b68c2e8d92ba3a0acee855c087a2c08936e32ebf\"" Feb 13 19:32:44.771672 systemd[1]: 
Started cri-containerd-569214a099c598342513fb63b68c2e8d92ba3a0acee855c087a2c08936e32ebf.scope - libcontainer container 569214a099c598342513fb63b68c2e8d92ba3a0acee855c087a2c08936e32ebf. Feb 13 19:32:44.804776 containerd[1908]: time="2025-02-13T19:32:44.804659517Z" level=info msg="StartContainer for \"569214a099c598342513fb63b68c2e8d92ba3a0acee855c087a2c08936e32ebf\" returns successfully" Feb 13 19:32:44.822530 systemd[1]: cri-containerd-569214a099c598342513fb63b68c2e8d92ba3a0acee855c087a2c08936e32ebf.scope: Deactivated successfully. Feb 13 19:32:44.822940 systemd[1]: cri-containerd-569214a099c598342513fb63b68c2e8d92ba3a0acee855c087a2c08936e32ebf.scope: Consumed 20ms CPU time, 7.4M memory peak, 2.2M read from disk. Feb 13 19:32:44.896634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-569214a099c598342513fb63b68c2e8d92ba3a0acee855c087a2c08936e32ebf-rootfs.mount: Deactivated successfully. Feb 13 19:32:44.907349 containerd[1908]: time="2025-02-13T19:32:44.907238206Z" level=info msg="shim disconnected" id=569214a099c598342513fb63b68c2e8d92ba3a0acee855c087a2c08936e32ebf namespace=k8s.io Feb 13 19:32:44.907349 containerd[1908]: time="2025-02-13T19:32:44.907319778Z" level=warning msg="cleaning up after shim disconnected" id=569214a099c598342513fb63b68c2e8d92ba3a0acee855c087a2c08936e32ebf namespace=k8s.io Feb 13 19:32:44.907349 containerd[1908]: time="2025-02-13T19:32:44.907332750Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:32:45.190357 kubelet[2372]: E0213 19:32:45.190203 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:32:45.424079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1821250684.mount: Deactivated successfully. 
Feb 13 19:32:45.679853 containerd[1908]: time="2025-02-13T19:32:45.679807688Z" level=info msg="CreateContainer within sandbox \"4ea480bdd650d78e8b26fede11e42f34a8590c16fad919491eb518151baf38bb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:32:45.760161 containerd[1908]: time="2025-02-13T19:32:45.757704464Z" level=info msg="CreateContainer within sandbox \"4ea480bdd650d78e8b26fede11e42f34a8590c16fad919491eb518151baf38bb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"56e397cd5b0eba31dc47fcff310b27bf7d9da7c9f177311f24585fcf3880dc4e\"" Feb 13 19:32:45.761199 containerd[1908]: time="2025-02-13T19:32:45.761124933Z" level=info msg="StartContainer for \"56e397cd5b0eba31dc47fcff310b27bf7d9da7c9f177311f24585fcf3880dc4e\"" Feb 13 19:32:45.834534 systemd[1]: Started cri-containerd-56e397cd5b0eba31dc47fcff310b27bf7d9da7c9f177311f24585fcf3880dc4e.scope - libcontainer container 56e397cd5b0eba31dc47fcff310b27bf7d9da7c9f177311f24585fcf3880dc4e. Feb 13 19:32:45.896072 containerd[1908]: time="2025-02-13T19:32:45.896025390Z" level=info msg="StartContainer for \"56e397cd5b0eba31dc47fcff310b27bf7d9da7c9f177311f24585fcf3880dc4e\" returns successfully" Feb 13 19:32:46.116715 systemd[1]: cri-containerd-56e397cd5b0eba31dc47fcff310b27bf7d9da7c9f177311f24585fcf3880dc4e.scope: Deactivated successfully. Feb 13 19:32:46.174086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56e397cd5b0eba31dc47fcff310b27bf7d9da7c9f177311f24585fcf3880dc4e-rootfs.mount: Deactivated successfully. 
Feb 13 19:32:46.191051 kubelet[2372]: E0213 19:32:46.190992 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:46.257959 containerd[1908]: time="2025-02-13T19:32:46.257869744Z" level=info msg="shim disconnected" id=56e397cd5b0eba31dc47fcff310b27bf7d9da7c9f177311f24585fcf3880dc4e namespace=k8s.io
Feb 13 19:32:46.257959 containerd[1908]: time="2025-02-13T19:32:46.257951587Z" level=warning msg="cleaning up after shim disconnected" id=56e397cd5b0eba31dc47fcff310b27bf7d9da7c9f177311f24585fcf3880dc4e namespace=k8s.io
Feb 13 19:32:46.257959 containerd[1908]: time="2025-02-13T19:32:46.257963937Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:32:46.286381 containerd[1908]: time="2025-02-13T19:32:46.286329953Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:32:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:32:46.292806 kubelet[2372]: E0213 19:32:46.292693 2372 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:32:46.492111 containerd[1908]: time="2025-02-13T19:32:46.492057933Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:32:46.493118 containerd[1908]: time="2025-02-13T19:32:46.493068667Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Feb 13 19:32:46.494229 containerd[1908]: time="2025-02-13T19:32:46.493995429Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:32:46.495434 containerd[1908]: time="2025-02-13T19:32:46.495402836Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.250692911s"
Feb 13 19:32:46.495557 containerd[1908]: time="2025-02-13T19:32:46.495537385Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 13 19:32:46.498158 containerd[1908]: time="2025-02-13T19:32:46.498130060Z" level=info msg="CreateContainer within sandbox \"64d91f6c089be8e2b2a9032d08841f9442b6b473223797915f4da75a4df46c2a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 19:32:46.513205 containerd[1908]: time="2025-02-13T19:32:46.513158528Z" level=info msg="CreateContainer within sandbox \"64d91f6c089be8e2b2a9032d08841f9442b6b473223797915f4da75a4df46c2a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"243b70c3c82235bf2e6568da4bcab7e73b18dc0ad9d87f7aecfffe91e3e1f605\""
Feb 13 19:32:46.514045 containerd[1908]: time="2025-02-13T19:32:46.514003858Z" level=info msg="StartContainer for \"243b70c3c82235bf2e6568da4bcab7e73b18dc0ad9d87f7aecfffe91e3e1f605\""
Feb 13 19:32:46.563676 systemd[1]: Started cri-containerd-243b70c3c82235bf2e6568da4bcab7e73b18dc0ad9d87f7aecfffe91e3e1f605.scope - libcontainer container 243b70c3c82235bf2e6568da4bcab7e73b18dc0ad9d87f7aecfffe91e3e1f605.
Feb 13 19:32:46.599825 containerd[1908]: time="2025-02-13T19:32:46.599773744Z" level=info msg="StartContainer for \"243b70c3c82235bf2e6568da4bcab7e73b18dc0ad9d87f7aecfffe91e3e1f605\" returns successfully"
Feb 13 19:32:46.689224 containerd[1908]: time="2025-02-13T19:32:46.689172542Z" level=info msg="CreateContainer within sandbox \"4ea480bdd650d78e8b26fede11e42f34a8590c16fad919491eb518151baf38bb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:32:46.709208 containerd[1908]: time="2025-02-13T19:32:46.709146056Z" level=info msg="CreateContainer within sandbox \"4ea480bdd650d78e8b26fede11e42f34a8590c16fad919491eb518151baf38bb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3fec41d5fd38cd2430eac2b93590a37bc33b481ed4ba9d4b1ce855a6104ba4b8\""
Feb 13 19:32:46.710938 containerd[1908]: time="2025-02-13T19:32:46.710899212Z" level=info msg="StartContainer for \"3fec41d5fd38cd2430eac2b93590a37bc33b481ed4ba9d4b1ce855a6104ba4b8\""
Feb 13 19:32:46.751693 kubelet[2372]: I0213 19:32:46.749836 2372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jbt8z" podStartSLOduration=1.4972956370000001 podStartE2EDuration="3.749809717s" podCreationTimestamp="2025-02-13 19:32:43 +0000 UTC" firstStartedPulling="2025-02-13 19:32:44.244053027 +0000 UTC m=+68.708790733" lastFinishedPulling="2025-02-13 19:32:46.496567103 +0000 UTC m=+70.961304813" observedRunningTime="2025-02-13 19:32:46.721605316 +0000 UTC m=+71.186343032" watchObservedRunningTime="2025-02-13 19:32:46.749809717 +0000 UTC m=+71.214547433"
Feb 13 19:32:46.778164 systemd[1]: Started cri-containerd-3fec41d5fd38cd2430eac2b93590a37bc33b481ed4ba9d4b1ce855a6104ba4b8.scope - libcontainer container 3fec41d5fd38cd2430eac2b93590a37bc33b481ed4ba9d4b1ce855a6104ba4b8.
Feb 13 19:32:46.829358 containerd[1908]: time="2025-02-13T19:32:46.828947865Z" level=info msg="StartContainer for \"3fec41d5fd38cd2430eac2b93590a37bc33b481ed4ba9d4b1ce855a6104ba4b8\" returns successfully"
Feb 13 19:32:46.830463 systemd[1]: cri-containerd-3fec41d5fd38cd2430eac2b93590a37bc33b481ed4ba9d4b1ce855a6104ba4b8.scope: Deactivated successfully.
Feb 13 19:32:46.862642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fec41d5fd38cd2430eac2b93590a37bc33b481ed4ba9d4b1ce855a6104ba4b8-rootfs.mount: Deactivated successfully.
Feb 13 19:32:46.870398 containerd[1908]: time="2025-02-13T19:32:46.870325422Z" level=info msg="shim disconnected" id=3fec41d5fd38cd2430eac2b93590a37bc33b481ed4ba9d4b1ce855a6104ba4b8 namespace=k8s.io
Feb 13 19:32:46.870398 containerd[1908]: time="2025-02-13T19:32:46.870392189Z" level=warning msg="cleaning up after shim disconnected" id=3fec41d5fd38cd2430eac2b93590a37bc33b481ed4ba9d4b1ce855a6104ba4b8 namespace=k8s.io
Feb 13 19:32:46.870398 containerd[1908]: time="2025-02-13T19:32:46.870403930Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:32:47.192867 kubelet[2372]: E0213 19:32:47.192716 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:47.701319 containerd[1908]: time="2025-02-13T19:32:47.699721520Z" level=info msg="CreateContainer within sandbox \"4ea480bdd650d78e8b26fede11e42f34a8590c16fad919491eb518151baf38bb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:32:47.774529 containerd[1908]: time="2025-02-13T19:32:47.774448066Z" level=info msg="CreateContainer within sandbox \"4ea480bdd650d78e8b26fede11e42f34a8590c16fad919491eb518151baf38bb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9c18a93992ba1164ecd6494b6dc3d1c20883186416f501e77a46c8edcd2d07d6\""
Feb 13 19:32:47.775706 containerd[1908]: time="2025-02-13T19:32:47.775655665Z" level=info msg="StartContainer for \"9c18a93992ba1164ecd6494b6dc3d1c20883186416f501e77a46c8edcd2d07d6\""
Feb 13 19:32:47.811569 systemd[1]: Started cri-containerd-9c18a93992ba1164ecd6494b6dc3d1c20883186416f501e77a46c8edcd2d07d6.scope - libcontainer container 9c18a93992ba1164ecd6494b6dc3d1c20883186416f501e77a46c8edcd2d07d6.
Feb 13 19:32:47.856352 containerd[1908]: time="2025-02-13T19:32:47.856160967Z" level=info msg="StartContainer for \"9c18a93992ba1164ecd6494b6dc3d1c20883186416f501e77a46c8edcd2d07d6\" returns successfully"
Feb 13 19:32:47.882215 systemd[1]: run-containerd-runc-k8s.io-9c18a93992ba1164ecd6494b6dc3d1c20883186416f501e77a46c8edcd2d07d6-runc.C8yLa6.mount: Deactivated successfully.
Feb 13 19:32:48.104549 kubelet[2372]: I0213 19:32:48.104483 2372 setters.go:602] "Node became not ready" node="172.31.18.16" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:32:48Z","lastTransitionTime":"2025-02-13T19:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:32:48.193028 kubelet[2372]: E0213 19:32:48.192962 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:49.193886 kubelet[2372]: E0213 19:32:49.193826 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:50.194742 kubelet[2372]: E0213 19:32:50.194679 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:50.735951 kubelet[2372]: I0213 19:32:50.735886 2372 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vjn68" podStartSLOduration=7.735864779 podStartE2EDuration="7.735864779s" podCreationTimestamp="2025-02-13 19:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:32:50.735789605 +0000 UTC m=+75.200527336" watchObservedRunningTime="2025-02-13 19:32:50.735864779 +0000 UTC m=+75.200602499"
Feb 13 19:32:50.963974 systemd[1]: run-containerd-runc-k8s.io-9c18a93992ba1164ecd6494b6dc3d1c20883186416f501e77a46c8edcd2d07d6-runc.uhoaCp.mount: Deactivated successfully.
Feb 13 19:32:51.195623 kubelet[2372]: E0213 19:32:51.195469 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:52.196284 kubelet[2372]: E0213 19:32:52.196165 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:53.044412 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 19:32:53.197223 kubelet[2372]: E0213 19:32:53.197157 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:54.197574 kubelet[2372]: E0213 19:32:54.197473 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:55.200553 kubelet[2372]: E0213 19:32:55.200498 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:56.136334 kubelet[2372]: E0213 19:32:56.135786 2372 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:56.201698 kubelet[2372]: E0213 19:32:56.201629 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:56.502483 systemd[1]: run-containerd-runc-k8s.io-9c18a93992ba1164ecd6494b6dc3d1c20883186416f501e77a46c8edcd2d07d6-runc.94hJ7j.mount: Deactivated successfully.
Feb 13 19:32:57.202795 kubelet[2372]: E0213 19:32:57.202706 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:57.928488 (udev-worker)[5103]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:32:57.940734 systemd-networkd[1748]: lxc_health: Link UP
Feb 13 19:32:57.943177 systemd-networkd[1748]: lxc_health: Gained carrier
Feb 13 19:32:58.203577 kubelet[2372]: E0213 19:32:58.203429 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:58.790013 systemd[1]: run-containerd-runc-k8s.io-9c18a93992ba1164ecd6494b6dc3d1c20883186416f501e77a46c8edcd2d07d6-runc.ipz3Hc.mount: Deactivated successfully.
Feb 13 19:32:59.154927 kubelet[2372]: E0213 19:32:59.154792 2372 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:56728->127.0.0.1:40811: write tcp 127.0.0.1:56728->127.0.0.1:40811: write: broken pipe
Feb 13 19:32:59.203640 kubelet[2372]: E0213 19:32:59.203584 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:32:59.497113 systemd-networkd[1748]: lxc_health: Gained IPv6LL
Feb 13 19:33:00.204277 kubelet[2372]: E0213 19:33:00.204205 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:01.206057 kubelet[2372]: E0213 19:33:01.205973 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:01.694151 ntpd[1885]: Listen normally on 15 lxc_health [fe80::f8df:b4ff:fe3f:12a8%15]:123
Feb 13 19:33:01.695125 ntpd[1885]: 13 Feb 19:33:01 ntpd[1885]: Listen normally on 15 lxc_health [fe80::f8df:b4ff:fe3f:12a8%15]:123
Feb 13 19:33:02.207012 kubelet[2372]: E0213 19:33:02.206939 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:03.207579 kubelet[2372]: E0213 19:33:03.207519 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:03.538064 systemd[1]: run-containerd-runc-k8s.io-9c18a93992ba1164ecd6494b6dc3d1c20883186416f501e77a46c8edcd2d07d6-runc.7MKV04.mount: Deactivated successfully.
Feb 13 19:33:04.208212 kubelet[2372]: E0213 19:33:04.208142 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:05.208821 kubelet[2372]: E0213 19:33:05.208721 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:06.209833 kubelet[2372]: E0213 19:33:06.209739 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:07.210771 kubelet[2372]: E0213 19:33:07.210707 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:08.211563 kubelet[2372]: E0213 19:33:08.211498 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:09.212464 kubelet[2372]: E0213 19:33:09.212401 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:10.212930 kubelet[2372]: E0213 19:33:10.212870 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:11.213699 kubelet[2372]: E0213 19:33:11.213644 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:12.214649 kubelet[2372]: E0213 19:33:12.214583 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:13.215331 kubelet[2372]: E0213 19:33:13.215261 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:14.215885 kubelet[2372]: E0213 19:33:14.215828 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:15.216703 kubelet[2372]: E0213 19:33:15.216642 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:16.133444 kubelet[2372]: E0213 19:33:16.133391 2372 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:16.217046 kubelet[2372]: E0213 19:33:16.216990 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:17.217434 kubelet[2372]: E0213 19:33:17.217369 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:18.218114 kubelet[2372]: E0213 19:33:18.218052 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:19.218672 kubelet[2372]: E0213 19:33:19.218611 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:20.219211 kubelet[2372]: E0213 19:33:20.219151 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:21.220269 kubelet[2372]: E0213 19:33:21.220203 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:22.221116 kubelet[2372]: E0213 19:33:22.221063 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:23.221524 kubelet[2372]: E0213 19:33:23.221463 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:24.222513 kubelet[2372]: E0213 19:33:24.222416 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:25.223576 kubelet[2372]: E0213 19:33:25.223516 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:26.223996 kubelet[2372]: E0213 19:33:26.223920 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:26.929845 systemd[1]: cri-containerd-243b70c3c82235bf2e6568da4bcab7e73b18dc0ad9d87f7aecfffe91e3e1f605.scope: Deactivated successfully.
Feb 13 19:33:26.962818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-243b70c3c82235bf2e6568da4bcab7e73b18dc0ad9d87f7aecfffe91e3e1f605-rootfs.mount: Deactivated successfully.
Feb 13 19:33:26.974230 containerd[1908]: time="2025-02-13T19:33:26.973836533Z" level=info msg="shim disconnected" id=243b70c3c82235bf2e6568da4bcab7e73b18dc0ad9d87f7aecfffe91e3e1f605 namespace=k8s.io
Feb 13 19:33:26.974230 containerd[1908]: time="2025-02-13T19:33:26.973966036Z" level=warning msg="cleaning up after shim disconnected" id=243b70c3c82235bf2e6568da4bcab7e73b18dc0ad9d87f7aecfffe91e3e1f605 namespace=k8s.io
Feb 13 19:33:26.974230 containerd[1908]: time="2025-02-13T19:33:26.973982176Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:33:27.224190 kubelet[2372]: E0213 19:33:27.224131 2372 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:33:27.583043 kubelet[2372]: E0213 19:33:27.582882 2372 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.16?timeout=10s\": context deadline exceeded"
Feb 13 19:33:27.800328 kubelet[2372]: I0213 19:33:27.799521 2372 scope.go:117] "RemoveContainer" containerID="243b70c3c82235bf2e6568da4bcab7e73b18dc0ad9d87f7aecfffe91e3e1f605"
Feb 13 19:33:27.805768 containerd[1908]: time="2025-02-13T19:33:27.805574642Z" level=info msg="CreateContainer within sandbox \"64d91f6c089be8e2b2a9032d08841f9442b6b473223797915f4da75a4df46c2a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}"
Feb 13 19:33:27.847784 containerd[1908]: time="2025-02-13T19:33:27.847644905Z" level=info msg="CreateContainer within sandbox \"64d91f6c089be8e2b2a9032d08841f9442b6b473223797915f4da75a4df46c2a\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"18df4e054724f37c18575f2bafe94e832528b4c4fbe7c9921a740678630e8630\""
Feb 13 19:33:27.849664 containerd[1908]: time="2025-02-13T19:33:27.849622816Z" level=info msg="StartContainer for \"18df4e054724f37c18575f2bafe94e832528b4c4fbe7c9921a740678630e8630\""
Feb 13 19:33:27.922149 systemd[1]: Started cri-containerd-18df4e054724f37c18575f2bafe94e832528b4c4fbe7c9921a740678630e8630.scope - libcontainer container 18df4e054724f37c18575f2bafe94e832528b4c4fbe7c9921a740678630e8630.
Feb 13 19:33:27.965017 containerd[1908]: time="2025-02-13T19:33:27.964969600Z" level=info msg="StartContainer for \"18df4e054724f37c18575f2bafe94e832528b4c4fbe7c9921a740678630e8630\" returns successfully"