Feb 13 20:11:45.103346 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025 Feb 13 20:11:45.103388 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:11:45.103404 kernel: BIOS-provided physical RAM map: Feb 13 20:11:45.103416 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 13 20:11:45.103428 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 13 20:11:45.103440 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 13 20:11:45.103457 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Feb 13 20:11:45.103470 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Feb 13 20:11:45.103482 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Feb 13 20:11:45.103495 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 13 20:11:45.103508 kernel: NX (Execute Disable) protection: active Feb 13 20:11:45.103520 kernel: APIC: Static calls initialized Feb 13 20:11:45.103532 kernel: SMBIOS 2.7 present. Feb 13 20:11:45.103545 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Feb 13 20:11:45.103564 kernel: Hypervisor detected: KVM Feb 13 20:11:45.103578 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 20:11:45.103592 kernel: kvm-clock: using sched offset of 6241942648 cycles Feb 13 20:11:45.103606 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 20:11:45.103621 kernel: tsc: Detected 2499.996 MHz processor Feb 13 20:11:45.105193 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 20:11:45.105214 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 20:11:45.105236 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Feb 13 20:11:45.105251 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Feb 13 20:11:45.105265 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 20:11:45.105280 kernel: Using GB pages for direct mapping Feb 13 20:11:45.105294 kernel: ACPI: Early table checksum verification disabled Feb 13 20:11:45.105308 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Feb 13 20:11:45.105323 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Feb 13 20:11:45.105337 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Feb 13 20:11:45.105351 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Feb 13 20:11:45.105369 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Feb 13 20:11:45.105383 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Feb 13 20:11:45.105398 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Feb 13 20:11:45.105412 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Feb 13 20:11:45.105426 
kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Feb 13 20:11:45.105440 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Feb 13 20:11:45.105454 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Feb 13 20:11:45.105468 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Feb 13 20:11:45.105482 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Feb 13 20:11:45.105500 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Feb 13 20:11:45.105520 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Feb 13 20:11:45.105535 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Feb 13 20:11:45.105550 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Feb 13 20:11:45.105565 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Feb 13 20:11:45.105583 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Feb 13 20:11:45.105599 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Feb 13 20:11:45.105614 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Feb 13 20:11:45.105649 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Feb 13 20:11:45.105665 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 20:11:45.105680 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 20:11:45.105695 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Feb 13 20:11:45.105710 kernel: NUMA: Initialized distance table, cnt=1 Feb 13 20:11:45.105725 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Feb 13 20:11:45.105744 kernel: Zone ranges: Feb 13 20:11:45.105759 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 20:11:45.105775 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Feb 13 20:11:45.105790 kernel: Normal empty Feb 13 20:11:45.105806 kernel: Movable zone start for each node Feb 13 20:11:45.105821 kernel: Early memory node ranges Feb 13 20:11:45.105836 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 13 20:11:45.105851 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Feb 13 20:11:45.105866 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Feb 13 20:11:45.105881 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 20:11:45.105900 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 13 20:11:45.105915 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Feb 13 20:11:45.105930 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 13 20:11:45.105945 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 20:11:45.105960 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Feb 13 20:11:45.105978 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 20:11:45.105993 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 20:11:45.106008 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 20:11:45.106023 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 20:11:45.106042 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 20:11:45.106057 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 20:11:45.106072 kernel: TSC deadline timer available Feb 13 20:11:45.106087 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 20:11:45.106103 
kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 20:11:45.106118 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Feb 13 20:11:45.106133 kernel: Booting paravirtualized kernel on KVM Feb 13 20:11:45.106148 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 20:11:45.106164 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 20:11:45.106183 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 20:11:45.106198 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 20:11:45.106213 kernel: pcpu-alloc: [0] 0 1 Feb 13 20:11:45.106226 kernel: kvm-guest: PV spinlocks enabled Feb 13 20:11:45.106239 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 20:11:45.106255 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:11:45.106270 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 20:11:45.106284 kernel: random: crng init done Feb 13 20:11:45.106302 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 20:11:45.106316 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 20:11:45.106331 kernel: Fallback order for Node 0: 0 Feb 13 20:11:45.106346 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Feb 13 20:11:45.106360 kernel: Policy zone: DMA32 Feb 13 20:11:45.106374 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 20:11:45.106390 kernel: Memory: 1932344K/2057760K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 125156K reserved, 0K cma-reserved) Feb 13 20:11:45.106404 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 20:11:45.106418 kernel: Kernel/User page tables isolation: enabled Feb 13 20:11:45.106436 kernel: ftrace: allocating 37921 entries in 149 pages Feb 13 20:11:45.106450 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 20:11:45.106465 kernel: Dynamic Preempt: voluntary Feb 13 20:11:45.106480 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 20:11:45.106496 kernel: rcu: RCU event tracing is enabled. Feb 13 20:11:45.106511 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 20:11:45.106526 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 20:11:45.106540 kernel: Rude variant of Tasks RCU enabled. Feb 13 20:11:45.106555 kernel: Tracing variant of Tasks RCU enabled. Feb 13 20:11:45.106572 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 20:11:45.106587 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 20:11:45.106602 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 13 20:11:45.106616 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Feb 13 20:11:45.107733 kernel: Console: colour VGA+ 80x25 Feb 13 20:11:45.107759 kernel: printk: console [ttyS0] enabled Feb 13 20:11:45.107775 kernel: ACPI: Core revision 20230628 Feb 13 20:11:45.107790 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Feb 13 20:11:45.107805 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 20:11:45.107826 kernel: x2apic enabled Feb 13 20:11:45.107841 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 20:11:45.107866 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Feb 13 20:11:45.107885 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Feb 13 20:11:45.107900 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 20:11:45.107915 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 20:11:45.107931 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 20:11:45.107946 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 20:11:45.107961 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 20:11:45.107975 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 20:11:45.107991 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Feb 13 20:11:45.108006 kernel: RETBleed: Vulnerable Feb 13 20:11:45.108023 kernel: Speculative Store Bypass: Vulnerable Feb 13 20:11:45.108038 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 20:11:45.108053 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 20:11:45.108067 kernel: GDS: Unknown: Dependent on hypervisor status Feb 13 20:11:45.108082 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 20:11:45.108190 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 20:11:45.108206 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 20:11:45.108225 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 13 20:11:45.108240 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 13 20:11:45.108255 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 13 20:11:45.108269 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 13 20:11:45.108284 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 13 20:11:45.108299 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Feb 13 20:11:45.108313 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 20:11:45.108328 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 13 20:11:45.108343 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 13 20:11:45.108358 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Feb 13 20:11:45.108375 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Feb 13 20:11:45.108390 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Feb 13 20:11:45.108404 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Feb 13 20:11:45.108419 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Feb 13 20:11:45.108434 kernel: Freeing SMP alternatives memory: 32K Feb 13 20:11:45.108449 kernel: pid_max: default: 32768 minimum: 301 Feb 13 20:11:45.108463 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 20:11:45.108478 kernel: landlock: Up and running. Feb 13 20:11:45.108493 kernel: SELinux: Initializing. Feb 13 20:11:45.108508 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 20:11:45.108522 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 20:11:45.108537 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 13 20:11:45.108555 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:11:45.108570 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:11:45.108585 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:11:45.108600 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 13 20:11:45.108615 kernel: signal: max sigframe size: 3632 Feb 13 20:11:45.109269 kernel: rcu: Hierarchical SRCU implementation. Feb 13 20:11:45.109364 kernel: rcu: Max phase no-delay instances is 400. Feb 13 20:11:45.109379 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 20:11:45.109393 kernel: smp: Bringing up secondary CPUs ... Feb 13 20:11:45.109412 kernel: smpboot: x86: Booting SMP configuration: Feb 13 20:11:45.109425 kernel: .... node #0, CPUs: #1 Feb 13 20:11:45.109441 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Feb 13 20:11:45.109455 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 13 20:11:45.109468 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 20:11:45.109482 kernel: smpboot: Max logical packages: 1 Feb 13 20:11:45.109495 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Feb 13 20:11:45.109508 kernel: devtmpfs: initialized Feb 13 20:11:45.109524 kernel: x86/mm: Memory block size: 128MB Feb 13 20:11:45.109537 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 20:11:45.109559 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 20:11:45.109572 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 20:11:45.109586 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 20:11:45.109600 kernel: audit: initializing netlink subsys (disabled) Feb 13 20:11:45.109613 kernel: audit: type=2000 audit(1739477504.254:1): state=initialized audit_enabled=0 res=1 Feb 13 20:11:45.109691 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 20:11:45.109705 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 20:11:45.109722 kernel: cpuidle: using governor menu Feb 13 20:11:45.109735 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 20:11:45.109748 kernel: dca service started, version 1.12.1 Feb 13 20:11:45.109762 kernel: PCI: Using configuration type 1 for base access Feb 13 20:11:45.109781 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 20:11:45.109795 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 20:11:45.109809 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 20:11:45.109822 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 20:11:45.109834 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 20:11:45.109851 kernel: ACPI: Added _OSI(Module Device) Feb 13 20:11:45.109865 kernel: ACPI: Added _OSI(Processor Device) Feb 13 20:11:45.109878 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 20:11:45.109891 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 20:11:45.109905 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Feb 13 20:11:45.109918 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 20:11:45.109931 kernel: ACPI: Interpreter enabled Feb 13 20:11:45.109944 kernel: ACPI: PM: (supports S0 S5) Feb 13 20:11:45.110036 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 20:11:45.110055 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 20:11:45.110068 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 20:11:45.110081 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Feb 13 20:11:45.110104 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 20:11:45.110337 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 13 20:11:45.110477 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Feb 13 20:11:45.110776 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Feb 13 20:11:45.110799 kernel: acpiphp: Slot [3] registered Feb 13 20:11:45.110827 kernel: acpiphp: Slot [4] registered Feb 13 20:11:45.110842 kernel: acpiphp: Slot [5] registered Feb 13 20:11:45.110857 kernel: acpiphp: Slot [6] registered Feb 13 20:11:45.110872 kernel: acpiphp: Slot [7] registered Feb 13 20:11:45.110887 kernel: acpiphp: Slot [8] registered Feb 13 20:11:45.110901 kernel: acpiphp: Slot [9] registered Feb 13 20:11:45.110916 kernel: acpiphp: Slot [10] registered Feb 13 20:11:45.110931 kernel: acpiphp: Slot [11] registered Feb 13 20:11:45.110946 kernel: acpiphp: Slot [12] registered Feb 13 20:11:45.110964 kernel: acpiphp: Slot [13] registered Feb 13 20:11:45.110979 kernel: acpiphp: Slot [14] registered Feb 13 20:11:45.110993 kernel: acpiphp: Slot [15] registered Feb 13 20:11:45.111008 kernel: acpiphp: Slot [16] registered Feb 13 20:11:45.111023 kernel: acpiphp: Slot [17] registered Feb 13 20:11:45.111038 kernel: acpiphp: Slot [18] registered Feb 13 20:11:45.111054 kernel: acpiphp: Slot [19] registered Feb 13 20:11:45.111069 kernel: acpiphp: Slot [20] registered Feb 13 20:11:45.111084 kernel: acpiphp: Slot [21] registered Feb 13 20:11:45.111101 kernel: acpiphp: Slot [22] registered Feb 13 20:11:45.111115 kernel: acpiphp: Slot [23] registered Feb 13 20:11:45.111134 kernel: acpiphp: Slot [24] registered Feb 13 20:11:45.111152 kernel: acpiphp: Slot [25] registered Feb 13 20:11:45.111180 kernel: acpiphp: Slot [26] registered Feb 13 20:11:45.111201 kernel: acpiphp: Slot [27] registered Feb 13 20:11:45.111217 kernel: acpiphp: Slot [28] registered Feb 13 20:11:45.111233 kernel: acpiphp: Slot [29] registered Feb 13 20:11:45.111249 kernel: acpiphp: Slot [30] registered Feb 13 20:11:45.111265 kernel: acpiphp: Slot [31] registered Feb 13 20:11:45.111284 kernel: PCI host bridge to bus 0000:00 
Feb 13 20:11:45.111508 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 20:11:45.111649 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 20:11:45.111851 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 20:11:45.111969 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 13 20:11:45.112084 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 20:11:45.112231 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 13 20:11:45.112378 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 13 20:11:45.112516 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Feb 13 20:11:45.114560 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 13 20:11:45.115013 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Feb 13 20:11:45.115159 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Feb 13 20:11:45.115296 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Feb 13 20:11:45.116075 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Feb 13 20:11:45.116234 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Feb 13 20:11:45.116365 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Feb 13 20:11:45.116493 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Feb 13 20:11:45.116618 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 10742 usecs Feb 13 20:11:45.116854 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Feb 13 20:11:45.116980 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Feb 13 20:11:45.117115 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Feb 13 20:11:45.117294 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 20:11:45.117441 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Feb 13 20:11:45.117582 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Feb 13 20:11:45.117802 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Feb 13 20:11:45.117934 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Feb 13 20:11:45.117954 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 20:11:45.117976 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 20:11:45.117993 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 20:11:45.118008 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 20:11:45.118024 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 13 20:11:45.118040 kernel: iommu: Default domain type: Translated Feb 13 20:11:45.118055 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 20:11:45.118072 kernel: PCI: Using ACPI for IRQ routing Feb 13 20:11:45.118087 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 20:11:45.118103 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 13 20:11:45.118122 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Feb 13 20:11:45.118248 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Feb 13 20:11:45.118374 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Feb 13 20:11:45.118515 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 20:11:45.118534 kernel: vgaarb: loaded Feb 13 20:11:45.118551 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Feb 13 20:11:45.118569 kernel: hpet0: 8 
comparators, 32-bit 62.500000 MHz counter Feb 13 20:11:45.118583 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 20:11:45.118598 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 20:11:45.118620 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 20:11:45.121688 kernel: pnp: PnP ACPI init Feb 13 20:11:45.121708 kernel: pnp: PnP ACPI: found 5 devices Feb 13 20:11:45.121725 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 20:11:45.121743 kernel: NET: Registered PF_INET protocol family Feb 13 20:11:45.121760 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 20:11:45.121776 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 13 20:11:45.121793 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 20:11:45.121814 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 20:11:45.121832 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 20:11:45.121848 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 13 20:11:45.121865 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 20:11:45.121882 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 20:11:45.121898 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 20:11:45.121915 kernel: NET: Registered PF_XDP protocol family Feb 13 20:11:45.122080 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 20:11:45.122207 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 20:11:45.122334 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 20:11:45.122454 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 13 20:11:45.122598 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 13 20:11:45.122620 kernel: PCI: CLS 0 bytes, default 64 Feb 13 20:11:45.124676 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 20:11:45.124695 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Feb 13 20:11:45.124713 kernel: clocksource: Switched to clocksource tsc Feb 13 20:11:45.124730 kernel: Initialise system trusted keyrings Feb 13 20:11:45.124752 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 13 20:11:45.124769 kernel: Key type asymmetric registered Feb 13 20:11:45.124786 kernel: Asymmetric key parser 'x509' registered Feb 13 20:11:45.124802 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 20:11:45.124819 kernel: io scheduler mq-deadline registered Feb 13 20:11:45.124835 kernel: io scheduler kyber registered Feb 13 20:11:45.124852 kernel: io scheduler bfq registered Feb 13 20:11:45.124869 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 20:11:45.124885 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 20:11:45.124905 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 20:11:45.124922 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 20:11:45.124938 kernel: i8042: Warning: Keylock active Feb 13 20:11:45.124954 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 20:11:45.124970 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 20:11:45.125150 kernel: rtc_cmos 00:00: RTC can 
wake from S4 Feb 13 20:11:45.125282 kernel: rtc_cmos 00:00: registered as rtc0 Feb 13 20:11:45.125409 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T20:11:44 UTC (1739477504) Feb 13 20:11:45.125538 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Feb 13 20:11:45.125559 kernel: intel_pstate: CPU model not supported Feb 13 20:11:45.125576 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:11:45.125593 kernel: Segment Routing with IPv6 Feb 13 20:11:45.125609 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:11:45.125638 kernel: NET: Registered PF_PACKET protocol family Feb 13 20:11:45.127762 kernel: Key type dns_resolver registered Feb 13 20:11:45.127782 kernel: IPI shorthand broadcast: enabled Feb 13 20:11:45.127799 kernel: sched_clock: Marking stable (688002528, 279981087)->(1058526407, -90542792) Feb 13 20:11:45.127822 kernel: registered taskstats version 1 Feb 13 20:11:45.127840 kernel: Loading compiled-in X.509 certificates Feb 13 20:11:45.127857 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 13 20:11:45.127873 kernel: Key type .fscrypt registered Feb 13 20:11:45.127890 kernel: Key type fscrypt-provisioning registered Feb 13 20:11:45.127907 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 20:11:45.127924 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:11:45.127940 kernel: ima: No architecture policies found Feb 13 20:11:45.127957 kernel: clk: Disabling unused clocks Feb 13 20:11:45.127977 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 13 20:11:45.127994 kernel: Write protecting the kernel read-only data: 36864k Feb 13 20:11:45.128011 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 13 20:11:45.128028 kernel: Run /init as init process Feb 13 20:11:45.128045 kernel: with arguments: Feb 13 20:11:45.128061 kernel: /init Feb 13 20:11:45.128077 kernel: with environment: Feb 13 20:11:45.128093 kernel: HOME=/ Feb 13 20:11:45.128109 kernel: TERM=linux Feb 13 20:11:45.128129 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:11:45.128175 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:11:45.128197 systemd[1]: Detected virtualization amazon. Feb 13 20:11:45.128216 systemd[1]: Detected architecture x86-64. Feb 13 20:11:45.128234 systemd[1]: Running in initrd. Feb 13 20:11:45.128252 systemd[1]: No hostname configured, using default hostname. Feb 13 20:11:45.128269 systemd[1]: Hostname set to <localhost>. Feb 13 20:11:45.128290 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:11:45.128308 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:11:45.128326 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:11:45.128345 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:11:45.128365 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:11:45.128383 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Feb 13 20:11:45.128401 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:11:45.128420 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:11:45.128444 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:11:45.128463 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:11:45.128481 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:11:45.128500 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:11:45.128518 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:11:45.128537 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:11:45.128559 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:11:45.128577 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:11:45.128595 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:11:45.128614 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:11:45.130674 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:11:45.130711 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:11:45.130730 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:11:45.130750 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:11:45.130769 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:11:45.130792 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:11:45.130810 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 20:11:45.130836 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 20:11:45.130856 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:11:45.130876 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 20:11:45.130901 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 20:11:45.130920 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:11:45.130939 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:11:45.130959 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:11:45.130985 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 20:11:45.131131 systemd-journald[177]: Collecting audit messages is disabled. Feb 13 20:11:45.131182 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:11:45.131201 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:11:45.131220 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:11:45.131245 systemd-journald[177]: Journal started Feb 13 20:11:45.131284 systemd-journald[177]: Runtime Journal (/run/log/journal/ec2a4f41aea99153b2dd726dba22b8bf) is 4.8M, max 38.6M, 33.7M free. Feb 13 20:11:45.114425 systemd-modules-load[179]: Inserted module 'overlay' Feb 13 20:11:45.138761 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:11:45.159663 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Feb 13 20:11:45.162679 kernel: Bridge firewalling registered Feb 13 20:11:45.160942 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:11:45.161910 systemd-modules-load[179]: Inserted module 'br_netfilter' Feb 13 20:11:45.291386 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:11:45.300474 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:11:45.311045 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:11:45.328876 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:11:45.332543 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:11:45.353595 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:11:45.354214 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:11:45.374059 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:11:45.383124 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:11:45.388169 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:11:45.397271 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:11:45.409951 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 20:11:45.440523 dracut-cmdline[215]: dracut-dracut-053 Feb 13 20:11:45.445851 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:11:45.445851 systemd-resolved[211]: Positive Trust Anchors: Feb 13 20:11:45.445861 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:11:45.445911 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:11:45.471622 systemd-resolved[211]: Defaulting to hostname 'linux'. Feb 13 20:11:45.474432 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:11:45.476861 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:11:45.548796 kernel: SCSI subsystem initialized Feb 13 20:11:45.559659 kernel: Loading iSCSI transport class v2.0-870. 
Feb 13 20:11:45.572660 kernel: iscsi: registered transport (tcp) Feb 13 20:11:45.600757 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:11:45.600836 kernel: QLogic iSCSI HBA Driver Feb 13 20:11:45.645531 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 20:11:45.653854 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:11:45.688897 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 20:11:45.688982 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:11:45.689003 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:11:45.735666 kernel: raid6: avx512x4 gen() 14399 MB/s Feb 13 20:11:45.752662 kernel: raid6: avx512x2 gen() 13060 MB/s Feb 13 20:11:45.769681 kernel: raid6: avx512x1 gen() 15019 MB/s Feb 13 20:11:45.786662 kernel: raid6: avx2x4 gen() 15355 MB/s Feb 13 20:11:45.803660 kernel: raid6: avx2x2 gen() 15416 MB/s Feb 13 20:11:45.821853 kernel: raid6: avx2x1 gen() 11201 MB/s Feb 13 20:11:45.821927 kernel: raid6: using algorithm avx2x2 gen() 15416 MB/s Feb 13 20:11:45.839659 kernel: raid6: .... xor() 13833 MB/s, rmw enabled Feb 13 20:11:45.839734 kernel: raid6: using avx512x2 recovery algorithm Feb 13 20:11:45.863658 kernel: xor: automatically using best checksumming function avx Feb 13 20:11:46.266654 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:11:46.282170 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:11:46.295098 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:11:46.325795 systemd-udevd[397]: Using default interface naming scheme 'v255'. Feb 13 20:11:46.333796 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:11:46.347943 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:11:46.384258 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Feb 13 20:11:46.435784 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:11:46.445986 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:11:46.596271 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:11:46.605930 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 20:11:46.639576 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:11:46.644228 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:11:46.648738 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:11:46.653267 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:11:46.660860 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:11:46.693702 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:11:46.718855 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 20:11:46.749702 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:11:46.758763 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 20:11:46.758811 kernel: AES CTR mode by8 optimization enabled Feb 13 20:11:46.749884 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 20:11:46.753465 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:11:46.756465 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:11:46.761213 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:11:46.770720 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:11:46.777289 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 13 20:11:46.802368 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 13 20:11:46.802620 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Feb 13 20:11:46.802821 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 13 20:11:46.803004 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:10:0b:27:c8:15 Feb 13 20:11:46.803170 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 13 20:11:46.782037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:11:46.821683 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 13 20:11:46.826662 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 20:11:46.826730 kernel: GPT:9289727 != 16777215 Feb 13 20:11:46.826749 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 20:11:46.826774 kernel: GPT:9289727 != 16777215 Feb 13 20:11:46.826797 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 20:11:46.826814 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 20:11:46.832765 (udev-worker)[457]: Network interface NamePolicy= disabled on kernel command line. Feb 13 20:11:47.011272 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (449) Feb 13 20:11:47.011320 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (458) Feb 13 20:11:46.997795 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Feb 13 20:11:47.015953 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:11:47.051935 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Feb 13 20:11:47.063375 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 20:11:47.069116 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Feb 13 20:11:47.069254 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Feb 13 20:11:47.078815 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:11:47.082972 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:11:47.087145 disk-uuid[618]: Primary Header is updated. Feb 13 20:11:47.087145 disk-uuid[618]: Secondary Entries is updated. Feb 13 20:11:47.087145 disk-uuid[618]: Secondary Header is updated. Feb 13 20:11:47.094666 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 20:11:47.109767 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 20:11:47.127071 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:11:48.112652 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 20:11:48.115955 disk-uuid[620]: The operation has completed successfully. 
Feb 13 20:11:48.374998 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:11:48.375127 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:11:48.404161 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:11:48.412737 sh[886]: Success Feb 13 20:11:48.431661 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 20:11:48.560218 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:11:48.573798 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:11:48.579969 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 20:11:48.627220 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 20:11:48.627304 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:11:48.627325 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:11:48.627343 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:11:48.630604 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:11:48.672658 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 20:11:48.678523 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:11:48.683341 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:11:48.695317 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:11:48.706889 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:11:48.756112 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:11:48.756183 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:11:48.756203 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 20:11:48.770515 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 20:11:48.816203 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 20:11:48.823969 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:11:48.840536 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:11:48.850169 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 20:11:48.917432 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:11:48.931067 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:11:49.072890 systemd-networkd[1078]: lo: Link UP Feb 13 20:11:49.075901 systemd-networkd[1078]: lo: Gained carrier Feb 13 20:11:49.079398 systemd-networkd[1078]: Enumeration completed Feb 13 20:11:49.079995 systemd-networkd[1078]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:11:49.080000 systemd-networkd[1078]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:11:49.081088 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:11:49.098911 systemd[1]: Reached target network.target - Network. 
Feb 13 20:11:49.103997 ignition[1012]: Ignition 2.19.0 Feb 13 20:11:49.104642 ignition[1012]: Stage: fetch-offline Feb 13 20:11:49.104320 systemd-networkd[1078]: eth0: Link UP Feb 13 20:11:49.106577 ignition[1012]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:11:49.104326 systemd-networkd[1078]: eth0: Gained carrier Feb 13 20:11:49.106592 ignition[1012]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 20:11:49.104415 systemd-networkd[1078]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:11:49.107293 ignition[1012]: Ignition finished successfully Feb 13 20:11:49.132223 systemd-networkd[1078]: eth0: DHCPv4 address 172.31.17.230/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 20:11:49.132806 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:11:49.140005 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 20:11:49.167274 ignition[1087]: Ignition 2.19.0 Feb 13 20:11:49.167289 ignition[1087]: Stage: fetch Feb 13 20:11:49.167886 ignition[1087]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:11:49.167900 ignition[1087]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 20:11:49.168135 ignition[1087]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 20:11:49.224904 ignition[1087]: PUT result: OK Feb 13 20:11:49.229019 ignition[1087]: parsed url from cmdline: "" Feb 13 20:11:49.229030 ignition[1087]: no config URL provided Feb 13 20:11:49.229040 ignition[1087]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:11:49.229054 ignition[1087]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:11:49.229078 ignition[1087]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 20:11:49.230405 ignition[1087]: PUT result: OK Feb 13 20:11:49.230503 ignition[1087]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 13 20:11:49.233461 ignition[1087]: GET result: OK Feb 13 20:11:49.233588 ignition[1087]: parsing config with SHA512: 28640677c7dc9f8490229b54c96f623a0d251e9df3b470db365bea3979f7accd1638208a68b8de63aaad3fc0c3660eb9a8a3156190c58259b965189b16de74d9 Feb 13 20:11:49.239598 unknown[1087]: fetched base config from "system" Feb 13 20:11:49.239614 unknown[1087]: fetched base config from "system" Feb 13 20:11:49.239622 unknown[1087]: fetched user config from "aws" Feb 13 20:11:49.241516 ignition[1087]: fetch: fetch complete Feb 13 20:11:49.241521 ignition[1087]: fetch: fetch passed Feb 13 20:11:49.241585 ignition[1087]: Ignition finished successfully Feb 13 20:11:49.245951 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 20:11:49.253112 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:11:49.284476 ignition[1093]: Ignition 2.19.0 Feb 13 20:11:49.284493 ignition[1093]: Stage: kargs Feb 13 20:11:49.285172 ignition[1093]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:11:49.285186 ignition[1093]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 20:11:49.285293 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 20:11:49.286565 ignition[1093]: PUT result: OK Feb 13 20:11:49.293008 ignition[1093]: kargs: kargs passed Feb 13 20:11:49.293614 ignition[1093]: Ignition finished successfully Feb 13 20:11:49.297467 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:11:49.305995 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 20:11:49.361548 ignition[1099]: Ignition 2.19.0 Feb 13 20:11:49.361563 ignition[1099]: Stage: disks Feb 13 20:11:49.362555 ignition[1099]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:11:49.362572 ignition[1099]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 20:11:49.362830 ignition[1099]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 20:11:49.366180 ignition[1099]: PUT result: OK Feb 13 20:11:49.372275 ignition[1099]: disks: disks passed Feb 13 20:11:49.372340 ignition[1099]: Ignition finished successfully Feb 13 20:11:49.374903 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:11:49.377604 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:11:49.379252 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:11:49.381489 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:11:49.384282 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:11:49.386751 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:11:49.406946 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:11:49.435443 systemd-fsck[1107]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 20:11:49.441336 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:11:49.454074 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:11:49.621695 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 20:11:49.628397 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:11:49.629194 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:11:49.638782 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:11:49.644867 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:11:49.647210 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 20:11:49.647274 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:11:49.647350 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:11:49.658752 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:11:49.661257 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:11:49.666652 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1126) Feb 13 20:11:49.670240 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:11:49.670295 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:11:49.670309 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 20:11:49.683659 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 20:11:49.687283 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:11:49.766815 initrd-setup-root[1150]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:11:49.774735 initrd-setup-root[1157]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:11:49.780998 initrd-setup-root[1164]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:11:49.788037 initrd-setup-root[1171]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:11:49.928430 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:11:49.940809 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:11:49.946988 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:11:49.955109 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:11:49.956375 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:11:49.994252 ignition[1243]: INFO : Ignition 2.19.0 Feb 13 20:11:49.995955 ignition[1243]: INFO : Stage: mount Feb 13 20:11:49.997066 ignition[1243]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:11:49.997066 ignition[1243]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 20:11:49.997066 ignition[1243]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 20:11:50.002759 ignition[1243]: INFO : PUT result: OK Feb 13 20:11:50.004398 ignition[1243]: INFO : mount: mount passed Feb 13 20:11:50.005479 ignition[1243]: INFO : Ignition finished successfully Feb 13 20:11:50.008977 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:11:50.017800 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:11:50.023864 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:11:50.039033 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:11:50.073142 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1256) Feb 13 20:11:50.073204 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:11:50.073291 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:11:50.075190 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 20:11:50.079652 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 20:11:50.082701 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:11:50.111670 ignition[1273]: INFO : Ignition 2.19.0 Feb 13 20:11:50.111670 ignition[1273]: INFO : Stage: files Feb 13 20:11:50.111670 ignition[1273]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:11:50.111670 ignition[1273]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 20:11:50.118888 ignition[1273]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 20:11:50.118888 ignition[1273]: INFO : PUT result: OK Feb 13 20:11:50.124851 ignition[1273]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:11:50.126169 ignition[1273]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:11:50.126169 ignition[1273]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:11:50.131877 ignition[1273]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:11:50.133699 ignition[1273]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:11:50.135283 ignition[1273]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:11:50.134574 unknown[1273]: wrote ssh authorized keys file for user: core Feb 13 20:11:50.138809 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:11:50.138809 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 20:11:50.138809 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:11:50.138809 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:11:50.138809 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:11:50.138809 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:11:50.138809 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:11:50.138809 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:11:50.138809 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:11:50.138809 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 20:11:50.599798 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 13 20:11:50.707120 systemd-networkd[1078]: eth0: Gained IPv6LL Feb 13 20:11:51.011132 ignition[1273]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:11:51.011132 ignition[1273]: INFO : files: op(8): [started] processing unit "containerd.service" Feb 13 20:11:51.016891 ignition[1273]: INFO : files: op(8): op(9): 
[started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:11:51.020394 ignition[1273]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 20:11:51.020394 ignition[1273]: INFO : files: op(8): [finished] processing unit "containerd.service" Feb 13 20:11:51.025200 ignition[1273]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:11:51.027559 ignition[1273]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:11:51.030101 ignition[1273]: INFO : files: files passed Feb 13 20:11:51.030101 ignition[1273]: INFO : Ignition finished successfully Feb 13 20:11:51.033408 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:11:51.040405 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:11:51.054948 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:11:51.064108 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:11:51.064264 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:11:51.076822 initrd-setup-root-after-ignition[1301]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:11:51.076822 initrd-setup-root-after-ignition[1301]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:11:51.087780 initrd-setup-root-after-ignition[1305]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:11:51.097970 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:11:51.120307 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:11:51.128820 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:11:51.162336 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:11:51.162471 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:11:51.165782 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:11:51.169734 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:11:51.170057 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:11:51.180162 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:11:51.216885 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:11:51.227178 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:11:51.246439 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:11:51.249566 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:11:51.251880 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:11:51.253494 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:11:51.253616 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:11:51.261656 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
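Earlier in the files stage, Ignition wrote a drop-in named "10-use-cgroupfs.conf" under /etc/systemd/system/containerd.service.d/, alongside /etc/flatcar-cgroupv1, to keep containerd on the legacy cgroupfs driver. The drop-in's contents are not reproduced in the log; a hypothetical minimal drop-in of this kind would only need a [Service] section pointing containerd at a cgroupfs-flavoured configuration, for example:

    # /etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf
    # Illustrative only; the actual file written by Ignition is not shown in the log.
    [Service]
    Environment=CONTAINERD_CONFIG=/usr/share/containerd/config-cgroupfs.toml

systemd merges every *.conf file found in a <unit>.service.d/ directory into the unit definition, so the shipped containerd.service itself does not need to be modified.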
Feb 13 20:11:51.265828 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:11:51.266676 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:11:51.267132 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:11:51.267459 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:11:51.268081 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:11:51.268697 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:11:51.269413 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:11:51.269923 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:11:51.270547 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:11:51.271005 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:11:51.271200 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:11:51.272098 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:11:51.272605 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:11:51.273052 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:11:51.281732 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:11:51.284910 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:11:51.285058 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:11:51.311466 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:11:51.311746 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:11:51.315517 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:11:51.315987 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:11:51.328026 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:11:51.334076 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:11:51.342949 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:11:51.343315 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:11:51.347906 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:11:51.348023 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:11:51.363206 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:11:51.363340 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:11:51.378438 ignition[1325]: INFO : Ignition 2.19.0 Feb 13 20:11:51.378438 ignition[1325]: INFO : Stage: umount Feb 13 20:11:51.384606 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:11:51.384606 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 20:11:51.384606 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 20:11:51.390806 ignition[1325]: INFO : PUT result: OK Feb 13 20:11:51.397362 ignition[1325]: INFO : umount: umount passed Feb 13 20:11:51.401523 ignition[1325]: INFO : Ignition finished successfully Feb 13 20:11:51.403639 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:11:51.403876 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Feb 13 20:11:51.408099 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:11:51.408345 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:11:51.411126 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:11:51.411326 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:11:51.419708 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:11:51.419791 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:11:51.423049 systemd[1]: Stopped target network.target - Network. Feb 13 20:11:51.426321 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:11:51.426411 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:11:51.431753 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:11:51.433860 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:11:51.436238 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:11:51.438173 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:11:51.439760 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:11:51.440932 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:11:51.441065 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:11:51.452514 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:11:51.452581 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:11:51.454332 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:11:51.454415 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:11:51.460700 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:11:51.460787 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:11:51.473186 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:11:51.474391 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:11:51.489913 systemd-networkd[1078]: eth0: DHCPv6 lease lost Feb 13 20:11:51.494536 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:11:51.499086 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:11:51.499296 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:11:51.505224 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:11:51.505682 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:11:51.508694 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:11:51.508826 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:11:51.519398 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:11:51.519471 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:11:51.519988 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:11:51.520059 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:11:51.533949 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:11:51.535443 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:11:51.535752 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:11:51.537292 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Feb 13 20:11:51.537349 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:11:51.539832 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:11:51.540092 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:11:51.541337 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:11:51.541398 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:11:51.554370 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:11:51.583954 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:11:51.584344 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:11:51.588938 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:11:51.589029 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:11:51.596386 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:11:51.596455 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:11:51.597914 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:11:51.597953 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:11:51.601883 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:11:51.603066 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:11:51.608661 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:11:51.608729 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:11:51.614101 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:11:51.614174 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:11:51.626849 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:11:51.628147 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:11:51.628208 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:11:51.629617 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 20:11:51.629673 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:11:51.631033 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:11:51.631078 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:11:51.633682 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:11:51.633744 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:11:51.643532 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:11:51.643639 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:11:51.647746 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:11:51.658069 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:11:51.682960 systemd[1]: Switching root. Feb 13 20:11:51.717765 systemd-journald[177]: Journal stopped Feb 13 20:11:53.490734 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). 
Feb 13 20:11:53.490834 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:11:53.490859 kernel: SELinux: policy capability open_perms=1 Feb 13 20:11:53.490881 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:11:53.490901 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:11:53.490922 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:11:53.490942 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:11:53.490962 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:11:53.490987 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:11:53.491005 kernel: audit: type=1403 audit(1739477512.133:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:11:53.491035 systemd[1]: Successfully loaded SELinux policy in 52.065ms. Feb 13 20:11:53.491068 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.006ms. Feb 13 20:11:53.491092 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:11:53.491114 systemd[1]: Detected virtualization amazon. Feb 13 20:11:53.491135 systemd[1]: Detected architecture x86-64. Feb 13 20:11:53.491156 systemd[1]: Detected first boot. Feb 13 20:11:53.491178 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:11:53.491203 zram_generator::config[1385]: No configuration found. Feb 13 20:11:53.491227 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:11:53.491249 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:11:53.491270 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 20:11:53.491293 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:11:53.491319 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:11:53.491341 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:11:53.491362 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:11:53.491386 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:11:53.491409 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:11:53.491430 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:11:53.491451 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:11:53.491473 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:11:53.491495 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:11:53.491516 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:11:53.491538 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:11:53.491560 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:11:53.491585 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Feb 13 20:11:53.491606 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 20:11:53.511936 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:11:53.511984 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:11:53.512006 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:11:53.512026 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:11:53.512047 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:11:53.512067 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:11:53.512096 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:11:53.512116 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:11:53.512136 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:11:53.512156 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:11:53.512175 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:11:53.512195 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:11:53.512215 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:11:53.512234 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:11:53.512255 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:11:53.512278 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:11:53.512299 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:11:53.512319 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:11:53.512340 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:11:53.528849 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:11:53.528915 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:11:53.528939 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:11:53.528962 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:11:53.528985 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:11:53.529017 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:11:53.529039 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:11:53.529062 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:11:53.529085 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:11:53.529108 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:11:53.529130 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:11:53.529152 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:11:53.529174 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Feb 13 20:11:53.529199 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Feb 13 20:11:53.529220 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:11:53.529242 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:11:53.529264 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:11:53.529286 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:11:53.529308 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:11:53.529333 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:11:53.529393 systemd-journald[1486]: Collecting audit messages is disabled. Feb 13 20:11:53.529442 kernel: fuse: init (API version 7.39) Feb 13 20:11:53.529464 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:11:53.529487 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:11:53.529508 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:11:53.529530 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:11:53.529552 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:11:53.529575 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:11:53.529597 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:11:53.529622 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:11:53.542483 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:11:53.542514 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:11:53.542536 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:11:53.542559 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:11:53.542581 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:11:53.542615 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:11:53.542647 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:11:53.542669 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:11:53.542691 kernel: loop: module loaded Feb 13 20:11:53.542723 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:11:53.542746 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:11:53.542772 systemd-journald[1486]: Journal started Feb 13 20:11:53.543170 systemd-journald[1486]: Runtime Journal (/run/log/journal/ec2a4f41aea99153b2dd726dba22b8bf) is 4.8M, max 38.6M, 33.7M free. Feb 13 20:11:53.545678 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:11:53.553958 kernel: ACPI: bus type drm_connector registered Feb 13 20:11:53.549919 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:11:53.550177 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:11:53.557572 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:11:53.561940 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Feb 13 20:11:53.583949 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:11:53.596750 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:11:53.602797 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:11:53.604515 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:11:53.614969 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:11:53.629816 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:11:53.631867 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:11:53.641827 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:11:53.643305 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:11:53.659787 systemd-journald[1486]: Time spent on flushing to /var/log/journal/ec2a4f41aea99153b2dd726dba22b8bf is 77.633ms for 928 entries. Feb 13 20:11:53.659787 systemd-journald[1486]: System Journal (/var/log/journal/ec2a4f41aea99153b2dd726dba22b8bf) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:11:53.753036 systemd-journald[1486]: Received client request to flush runtime journal. Feb 13 20:11:53.653863 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:11:53.668859 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:11:53.673559 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:11:53.684034 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:11:53.688939 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:11:53.707578 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:11:53.711538 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:11:53.714802 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:11:53.726657 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:11:53.759601 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:11:53.782310 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:11:53.788401 udevadm[1542]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 20:11:53.792254 systemd-tmpfiles[1533]: ACLs are not supported, ignoring. Feb 13 20:11:53.792283 systemd-tmpfiles[1533]: ACLs are not supported, ignoring. Feb 13 20:11:53.804494 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:11:53.811977 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:11:53.873639 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:11:53.887204 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:11:53.916052 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. 
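The journal-flush step above moves the volatile runtime journal (kept under /run/log/journal while only tmpfs is writable) onto persistent storage under /var/log/journal; the size figures in the log (4.8M runtime with a ~38.6M cap, 8.0M system with a ~195.6M cap) are the automatic defaults journald derives from the size of each backing filesystem. Tighter limits can be pinned in journald.conf; a hypothetical example, not what this host uses:

    # /etc/systemd/journald.conf.d/size.conf  (illustrative values)
    [Journal]
    RuntimeMaxUse=30M
    SystemMaxUse=150M

Current usage can be checked at any time with "journalctl --disk-usage".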
Feb 13 20:11:53.916482 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Feb 13 20:11:53.927187 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:11:54.853653 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:11:54.867946 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:11:54.924211 systemd-udevd[1562]: Using default interface naming scheme 'v255'. Feb 13 20:11:54.994023 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:11:55.022079 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:11:55.064830 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:11:55.168474 (udev-worker)[1570]: Network interface NamePolicy= disabled on kernel command line. Feb 13 20:11:55.202804 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:11:55.229727 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Feb 13 20:11:55.311684 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 20:11:55.332684 kernel: ACPI: button: Power Button [PWRF] Feb 13 20:11:55.335656 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Feb 13 20:11:55.339671 kernel: ACPI: button: Sleep Button [SLPF] Feb 13 20:11:55.380656 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Feb 13 20:11:55.461866 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1570) Feb 13 20:11:55.395217 systemd-networkd[1566]: lo: Link UP Feb 13 20:11:55.395223 systemd-networkd[1566]: lo: Gained carrier Feb 13 20:11:55.400172 systemd-networkd[1566]: Enumeration completed Feb 13 20:11:55.400357 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:11:55.401356 systemd-networkd[1566]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:11:55.401361 systemd-networkd[1566]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:11:55.412004 systemd-networkd[1566]: eth0: Link UP Feb 13 20:11:55.412480 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:11:55.414254 systemd-networkd[1566]: eth0: Gained carrier Feb 13 20:11:55.414285 systemd-networkd[1566]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:11:55.426723 systemd-networkd[1566]: eth0: DHCPv4 address 172.31.17.230/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 20:11:55.462754 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:11:55.510764 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Feb 13 20:11:55.528719 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:11:55.771439 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 20:11:55.844467 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:11:55.846605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:11:55.855940 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
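The networkd entries above show eth0 being matched by the stock /usr/lib/systemd/network/zz-default.network and acquiring 172.31.17.230/20 via DHCPv4 from 172.31.16.1. The shipped file is not reproduced in the log; a minimal .network file with the same effect would look roughly like this (illustrative, and the literal interface name is an assumption, though the kernel command line does set net.ifnames=0):

    # /etc/systemd/network/50-dhcp.network  (illustrative)
    [Match]
    Name=eth0

    [Network]
    DHCP=yes

Files in /etc/systemd/network/ take precedence over the vendor copies in /usr/lib/systemd/network/, which is how a site-specific configuration would override zz-default.network.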
Feb 13 20:11:55.872210 lvm[1686]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:11:55.902603 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:11:55.907230 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:11:55.915895 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:11:55.946227 lvm[1689]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:11:55.996128 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:11:56.000413 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:11:56.004336 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:11:56.004395 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:11:56.006661 systemd[1]: Reached target machines.target - Containers. Feb 13 20:11:56.010196 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:11:56.024312 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:11:56.032489 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:11:56.035980 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:11:56.049418 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:11:56.079025 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:11:56.086393 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:11:56.109740 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:11:56.167511 kernel: loop0: detected capacity change from 0 to 210664 Feb 13 20:11:56.169990 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:11:56.220524 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:11:56.226484 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:11:56.291838 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:11:56.327659 kernel: loop1: detected capacity change from 0 to 142488 Feb 13 20:11:56.387656 kernel: loop2: detected capacity change from 0 to 140768 Feb 13 20:11:56.472656 kernel: loop3: detected capacity change from 0 to 61336 Feb 13 20:11:56.541665 kernel: loop4: detected capacity change from 0 to 210664 Feb 13 20:11:56.576688 kernel: loop5: detected capacity change from 0 to 142488 Feb 13 20:11:56.607654 kernel: loop6: detected capacity change from 0 to 140768 Feb 13 20:11:56.638690 kernel: loop7: detected capacity change from 0 to 61336 Feb 13 20:11:56.666302 (sd-merge)[1711]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 20:11:56.667110 (sd-merge)[1711]: Merged extensions into '/usr'. Feb 13 20:11:56.673173 systemd[1]: Reloading requested from client PID 1697 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:11:56.673200 systemd[1]: Reloading... 
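The "(sd-merge)" entries record systemd-sysext stacking the containerd-flatcar, docker-flatcar, kubernetes and oem-ami extension images as read-only overlays on /usr and /opt, followed by the daemon reload so the newly visible units are picked up. Each such image must carry an extension-release file so it is only merged onto a matching OS; a hypothetical minimal one (field values are illustrative, not copied from the images named above):

    # usr/lib/extension-release.d/extension-release.kubernetes  (illustrative)
    ID=flatcar
    SYSEXT_LEVEL=1.0

The merged view can be inspected afterwards with "systemd-sysext status", and "systemd-sysext unmerge" drops the overlays again.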
Feb 13 20:11:56.722075 systemd-networkd[1566]: eth0: Gained IPv6LL Feb 13 20:11:56.765677 zram_generator::config[1735]: No configuration found. Feb 13 20:11:57.025581 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:11:57.030663 ldconfig[1693]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:11:57.110730 systemd[1]: Reloading finished in 436 ms. Feb 13 20:11:57.131171 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:11:57.133374 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:11:57.135534 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:11:57.155012 systemd[1]: Starting ensure-sysext.service... Feb 13 20:11:57.158909 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:11:57.179785 systemd[1]: Reloading requested from client PID 1797 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:11:57.179930 systemd[1]: Reloading... Feb 13 20:11:57.217853 systemd-tmpfiles[1798]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:11:57.219285 systemd-tmpfiles[1798]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:11:57.221824 systemd-tmpfiles[1798]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:11:57.222411 systemd-tmpfiles[1798]: ACLs are not supported, ignoring. Feb 13 20:11:57.222733 systemd-tmpfiles[1798]: ACLs are not supported, ignoring. Feb 13 20:11:57.229105 systemd-tmpfiles[1798]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:11:57.229123 systemd-tmpfiles[1798]: Skipping /boot Feb 13 20:11:57.251488 systemd-tmpfiles[1798]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:11:57.251509 systemd-tmpfiles[1798]: Skipping /boot Feb 13 20:11:57.276657 zram_generator::config[1822]: No configuration found. Feb 13 20:11:57.570945 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:11:57.671853 systemd[1]: Reloading finished in 491 ms. Feb 13 20:11:57.694830 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:11:57.716279 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:11:57.729359 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:11:57.746123 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:11:57.756298 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:11:57.767512 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:11:57.803593 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:11:57.804595 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
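The "Duplicate line for path ..., ignoring" warnings above come from systemd-tmpfiles: the same path (/root, /var/log/journal, /var/lib/systemd) is declared by more than one tmpfiles.d fragment, and the duplicate declaration is skipped rather than treated as an error. For reference, a single tmpfiles.d line names an action type, a path, and optional mode/owner/age fields; a hypothetical fragment, not present on this host:

    # /etc/tmpfiles.d/example.conf  (illustrative)
    d /var/tmp/scratch 0755 root root 7d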
Feb 13 20:11:57.811182 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:11:57.820167 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:11:57.851709 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:11:57.853993 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:11:57.854537 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:11:57.904372 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:11:57.904613 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:11:57.924534 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:11:57.935068 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:11:57.935333 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:11:57.942793 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:11:57.943165 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:11:57.952141 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:11:57.954524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:11:57.956191 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:11:57.959231 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:11:57.959725 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:11:57.976345 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:11:57.985129 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:11:57.986118 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:11:58.001880 augenrules[1916]: No rules Feb 13 20:11:57.996174 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:11:58.015964 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:11:58.032992 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:11:58.035168 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:11:58.035944 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:11:58.050612 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:11:58.050773 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:11:58.055262 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:11:58.061873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Feb 13 20:11:58.062225 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:11:58.064416 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:11:58.071059 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:11:58.077011 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:11:58.077501 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:11:58.080465 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:11:58.082308 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:11:58.113820 systemd[1]: Finished ensure-sysext.service. Feb 13 20:11:58.117935 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:11:58.122867 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:11:58.122974 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:11:58.135866 systemd-resolved[1892]: Positive Trust Anchors: Feb 13 20:11:58.136252 systemd-resolved[1892]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:11:58.136319 systemd-resolved[1892]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:11:58.141456 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:11:58.145167 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:11:58.147238 systemd-resolved[1892]: Defaulting to hostname 'linux'. Feb 13 20:11:58.154237 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:11:58.155750 systemd[1]: Reached target network.target - Network. Feb 13 20:11:58.157212 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:11:58.159131 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:11:58.160619 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:11:58.162172 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:11:58.163722 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:11:58.165731 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:11:58.167248 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:11:58.169317 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
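The systemd-resolved startup lines above list its built-in DNSSEC configuration: the positive trust anchor is the root-zone DS record for the 2017 root signing key (key tag 20326, algorithm 8, digest type 2), and the negative trust anchors cover private and special-use domains (10.in-addr.arpa, home.arpa, .local, test, and so on) so validation is not attempted for them. Additional positive anchors can be supplied as DS records under /etc/dnssec-trust-anchors.d/; a hypothetical example that simply mirrors the record already shown in the log:

    # /etc/dnssec-trust-anchors.d/root.positive  (illustrative; same record resolved reports above)
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d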
Feb 13 20:11:58.171327 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:11:58.171376 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:11:58.172470 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:11:58.174832 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:11:58.177894 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:11:58.181421 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:11:58.191140 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:11:58.192747 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:11:58.194338 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:11:58.201695 systemd[1]: System is tainted: cgroupsv1 Feb 13 20:11:58.202519 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:11:58.202584 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:11:58.223824 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:11:58.228041 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:11:58.235949 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:11:58.242177 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:11:58.250666 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:11:58.253861 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:11:58.272802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:11:58.303091 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:11:58.318318 jq[1951]: false Feb 13 20:11:58.326898 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 20:11:58.331340 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:11:58.337881 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 20:11:58.358873 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:11:58.388527 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:11:58.407465 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:11:58.409745 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Feb 13 20:11:58.417428 dbus-daemon[1950]: [system] SELinux support is enabled Feb 13 20:11:58.426404 extend-filesystems[1952]: Found loop4 Feb 13 20:11:58.426404 extend-filesystems[1952]: Found loop5 Feb 13 20:11:58.426404 extend-filesystems[1952]: Found loop6 Feb 13 20:11:58.426404 extend-filesystems[1952]: Found loop7 Feb 13 20:11:58.426404 extend-filesystems[1952]: Found nvme0n1 Feb 13 20:11:58.426404 extend-filesystems[1952]: Found nvme0n1p1 Feb 13 20:11:58.426404 extend-filesystems[1952]: Found nvme0n1p2 Feb 13 20:11:58.426404 extend-filesystems[1952]: Found nvme0n1p3 Feb 13 20:11:58.426404 extend-filesystems[1952]: Found usr Feb 13 20:11:58.426404 extend-filesystems[1952]: Found nvme0n1p4 Feb 13 20:11:58.426404 extend-filesystems[1952]: Found nvme0n1p6 Feb 13 20:11:58.426404 extend-filesystems[1952]: Found nvme0n1p7 Feb 13 20:11:58.426404 extend-filesystems[1952]: Found nvme0n1p9 Feb 13 20:11:58.426404 extend-filesystems[1952]: Checking size of /dev/nvme0n1p9 Feb 13 20:11:58.425026 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:11:58.471053 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:30:53 UTC 2025 (1): Starting Feb 13 20:11:58.471053 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 20:11:58.471053 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: ---------------------------------------------------- Feb 13 20:11:58.471053 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: ntp-4 is maintained by Network Time Foundation, Feb 13 20:11:58.471053 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 20:11:58.471053 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: corporation. Support and training for ntp-4 are Feb 13 20:11:58.471053 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: available at https://www.nwtime.org/support Feb 13 20:11:58.471053 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: ---------------------------------------------------- Feb 13 20:11:58.471053 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: proto: precision = 0.061 usec (-24) Feb 13 20:11:58.471053 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: basedate set to 2025-02-01 Feb 13 20:11:58.471053 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: gps base set to 2025-02-02 (week 2352) Feb 13 20:11:58.438473 dbus-daemon[1950]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1566 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 20:11:58.438793 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:11:58.440183 ntpd[1960]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:30:53 UTC 2025 (1): Starting Feb 13 20:11:58.457183 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:11:58.440208 ntpd[1960]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 20:11:58.440219 ntpd[1960]: ---------------------------------------------------- Feb 13 20:11:58.440230 ntpd[1960]: ntp-4 is maintained by Network Time Foundation, Feb 13 20:11:58.440240 ntpd[1960]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 20:11:58.440250 ntpd[1960]: corporation. 
Support and training for ntp-4 are Feb 13 20:11:58.480815 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 20:11:58.480815 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 20:11:58.440260 ntpd[1960]: available at https://www.nwtime.org/support Feb 13 20:11:58.440270 ntpd[1960]: ---------------------------------------------------- Feb 13 20:11:58.454060 ntpd[1960]: proto: precision = 0.061 usec (-24) Feb 13 20:11:58.454392 ntpd[1960]: basedate set to 2025-02-01 Feb 13 20:11:58.454406 ntpd[1960]: gps base set to 2025-02-02 (week 2352) Feb 13 20:11:58.479103 ntpd[1960]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 20:11:58.479194 ntpd[1960]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 20:11:58.481199 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:11:58.481595 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:11:58.486213 ntpd[1960]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 20:11:58.486290 ntpd[1960]: Listen normally on 3 eth0 172.31.17.230:123 Feb 13 20:11:58.486591 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 20:11:58.486591 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: Listen normally on 3 eth0 172.31.17.230:123 Feb 13 20:11:58.486591 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: Listen normally on 4 lo [::1]:123 Feb 13 20:11:58.486591 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: Listen normally on 5 eth0 [fe80::410:bff:fe27:c815%2]:123 Feb 13 20:11:58.486332 ntpd[1960]: Listen normally on 4 lo [::1]:123 Feb 13 20:11:58.487958 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: Listening on routing socket on fd #22 for interface updates Feb 13 20:11:58.486548 ntpd[1960]: Listen normally on 5 eth0 [fe80::410:bff:fe27:c815%2]:123 Feb 13 20:11:58.486601 ntpd[1960]: Listening on routing socket on fd #22 for interface updates Feb 13 20:11:58.501708 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:11:58.510587 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:11:58.510587 ntpd[1960]: 13 Feb 20:11:58 ntpd[1960]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:11:58.505546 ntpd[1960]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:11:58.502390 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:11:58.505584 ntpd[1960]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:11:58.511465 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:11:58.526945 jq[1978]: true Feb 13 20:11:58.511937 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 20:11:58.573956 coreos-metadata[1948]: Feb 13 20:11:58.572 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 20:11:58.604948 extend-filesystems[1952]: Resized partition /dev/nvme0n1p9 Feb 13 20:11:58.607797 coreos-metadata[1948]: Feb 13 20:11:58.602 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 20:11:58.613533 extend-filesystems[2002]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:11:58.620795 coreos-metadata[1948]: Feb 13 20:11:58.614 INFO Fetch successful Feb 13 20:11:58.620795 coreos-metadata[1948]: Feb 13 20:11:58.614 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 20:11:58.620893 jq[1993]: true Feb 13 20:11:58.626750 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 20:11:58.647155 coreos-metadata[1948]: Feb 13 20:11:58.646 INFO Fetch successful Feb 13 20:11:58.647155 coreos-metadata[1948]: Feb 13 20:11:58.646 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 20:11:58.651317 coreos-metadata[1948]: Feb 13 20:11:58.650 INFO Fetch successful Feb 13 20:11:58.651317 coreos-metadata[1948]: Feb 13 20:11:58.650 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 20:11:58.653743 coreos-metadata[1948]: Feb 13 20:11:58.652 INFO Fetch successful Feb 13 20:11:58.653743 coreos-metadata[1948]: Feb 13 20:11:58.653 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 20:11:58.665229 coreos-metadata[1948]: Feb 13 20:11:58.664 INFO Fetch failed with 404: resource not found Feb 13 20:11:58.665229 coreos-metadata[1948]: Feb 13 20:11:58.664 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 20:11:58.671574 coreos-metadata[1948]: Feb 13 20:11:58.671 INFO Fetch successful Feb 13 20:11:58.671574 coreos-metadata[1948]: Feb 13 20:11:58.671 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 20:11:58.674888 coreos-metadata[1948]: Feb 13 20:11:58.673 INFO Fetch successful Feb 13 20:11:58.674888 coreos-metadata[1948]: Feb 13 20:11:58.674 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 20:11:58.680010 update_engine[1975]: I20250213 20:11:58.676543 1975 main.cc:92] Flatcar Update Engine starting Feb 13 20:11:58.703029 coreos-metadata[1948]: Feb 13 20:11:58.679 INFO Fetch successful Feb 13 20:11:58.703029 coreos-metadata[1948]: Feb 13 20:11:58.679 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 20:11:58.703029 coreos-metadata[1948]: Feb 13 20:11:58.691 INFO Fetch successful Feb 13 20:11:58.703029 coreos-metadata[1948]: Feb 13 20:11:58.691 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 20:11:58.703029 coreos-metadata[1948]: Feb 13 20:11:58.698 INFO Fetch successful Feb 13 20:11:58.701863 (ntainerd)[2000]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:11:58.690256 dbus-daemon[1950]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 20:11:58.710351 update_engine[1975]: I20250213 20:11:58.699324 1975 update_check_scheduler.cc:74] Next update check in 5m37s Feb 13 20:11:58.709209 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:11:58.712994 systemd[1]: Started update-engine.service - Update Engine. 
Feb 13 20:11:58.734243 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:11:58.734324 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:11:58.739651 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 20:11:58.770112 extend-filesystems[2002]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 20:11:58.770112 extend-filesystems[2002]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 20:11:58.770112 extend-filesystems[2002]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 20:11:58.775454 extend-filesystems[1952]: Resized filesystem in /dev/nvme0n1p9 Feb 13 20:11:58.780815 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 20:11:58.784114 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:11:58.784154 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:11:58.786522 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:11:58.791843 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:11:58.812108 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:11:58.814683 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:11:58.823842 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 20:11:58.885600 systemd-logind[1974]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 20:11:58.893831 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 20:11:58.895738 systemd-logind[1974]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 20:11:58.895797 systemd-logind[1974]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:11:58.902248 systemd-logind[1974]: New seat seat0. Feb 13 20:11:58.971349 bash[2063]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:11:59.044799 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:11:59.266422 sshd_keygen[1990]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:11:59.331253 locksmithd[2027]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:11:59.359653 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2041) Feb 13 20:11:59.426180 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:11:59.435474 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:11:59.464017 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:11:59.480149 systemd[1]: Starting sshkeys.service... Feb 13 20:11:59.486830 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:11:59.511303 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Feb 13 20:11:59.538202 dbus-daemon[1950]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 20:11:59.538382 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 20:11:59.546928 dbus-daemon[1950]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2025 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 20:11:59.562555 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:11:59.565120 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:11:59.577991 amazon-ssm-agent[2051]: Initializing new seelog logger Feb 13 20:11:59.580726 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 20:11:59.590902 amazon-ssm-agent[2051]: New Seelog Logger Creation Complete Feb 13 20:11:59.591219 amazon-ssm-agent[2051]: 2025/02/13 20:11:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:11:59.591283 amazon-ssm-agent[2051]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:11:59.591927 amazon-ssm-agent[2051]: 2025/02/13 20:11:59 processing appconfig overrides Feb 13 20:11:59.601076 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:11:59.603902 amazon-ssm-agent[2051]: 2025-02-13 20:11:59 INFO Proxy environment variables: Feb 13 20:11:59.603902 amazon-ssm-agent[2051]: 2025/02/13 20:11:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:11:59.603902 amazon-ssm-agent[2051]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:11:59.607741 amazon-ssm-agent[2051]: 2025/02/13 20:11:59 processing appconfig overrides Feb 13 20:11:59.629969 amazon-ssm-agent[2051]: 2025/02/13 20:11:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:11:59.629969 amazon-ssm-agent[2051]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:11:59.640325 amazon-ssm-agent[2051]: 2025/02/13 20:11:59 processing appconfig overrides Feb 13 20:11:59.644846 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:11:59.665322 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:11:59.688143 amazon-ssm-agent[2051]: 2025/02/13 20:11:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:11:59.688143 amazon-ssm-agent[2051]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 20:11:59.688143 amazon-ssm-agent[2051]: 2025/02/13 20:11:59 processing appconfig overrides Feb 13 20:11:59.706730 amazon-ssm-agent[2051]: 2025-02-13 20:11:59 INFO http_proxy: Feb 13 20:11:59.817448 amazon-ssm-agent[2051]: 2025-02-13 20:11:59 INFO no_proxy: Feb 13 20:11:59.825227 polkitd[2154]: Started polkitd version 121 Feb 13 20:11:59.849357 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:11:59.866127 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:11:59.884963 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:11:59.888951 systemd[1]: Reached target getty.target - Login Prompts. 
Feb 13 20:11:59.894803 containerd[2000]: time="2025-02-13T20:11:59.889023284Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:11:59.906329 polkitd[2154]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 20:11:59.909014 amazon-ssm-agent[2051]: 2025-02-13 20:11:59 INFO https_proxy: Feb 13 20:11:59.909788 polkitd[2154]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 20:11:59.913447 polkitd[2154]: Finished loading, compiling and executing 2 rules Feb 13 20:11:59.914830 dbus-daemon[1950]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 20:11:59.915031 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 20:11:59.917925 polkitd[2154]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 20:11:59.976541 coreos-metadata[2164]: Feb 13 20:11:59.972 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 20:11:59.979901 coreos-metadata[2164]: Feb 13 20:11:59.979 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 20:11:59.980668 coreos-metadata[2164]: Feb 13 20:11:59.980 INFO Fetch successful Feb 13 20:11:59.980668 coreos-metadata[2164]: Feb 13 20:11:59.980 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 20:11:59.985404 systemd-hostnamed[2025]: Hostname set to <ip-172-31-17-230> (transient) Feb 13 20:11:59.998355 coreos-metadata[2164]: Feb 13 20:11:59.982 INFO Fetch successful Feb 13 20:11:59.985534 systemd-resolved[1892]: System hostname changed to 'ip-172-31-17-230'. Feb 13 20:11:59.986448 unknown[2164]: wrote ssh authorized keys file for user: core Feb 13 20:12:00.013714 amazon-ssm-agent[2051]: 2025-02-13 20:11:59 INFO Checking if agent identity type OnPrem can be assumed Feb 13 20:12:00.043121 containerd[2000]: time="2025-02-13T20:12:00.043043035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:12:00.045497 containerd[2000]: time="2025-02-13T20:12:00.045438396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:12:00.045659 containerd[2000]: time="2025-02-13T20:12:00.045621928Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:12:00.045763 containerd[2000]: time="2025-02-13T20:12:00.045747453Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:12:00.045983 containerd[2000]: time="2025-02-13T20:12:00.045967740Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:12:00.046066 containerd[2000]: time="2025-02-13T20:12:00.046052553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:12:00.046193 containerd[2000]: time="2025-02-13T20:12:00.046175366Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:12:00.046257 containerd[2000]: time="2025-02-13T20:12:00.046244558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 20:12:00.046654 update-ssh-keys[2214]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:12:00.046933 containerd[2000]: time="2025-02-13T20:12:00.046595291Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:12:00.047571 containerd[2000]: time="2025-02-13T20:12:00.047026149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:12:00.047571 containerd[2000]: time="2025-02-13T20:12:00.047062422Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:12:00.047571 containerd[2000]: time="2025-02-13T20:12:00.047080075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:12:00.047571 containerd[2000]: time="2025-02-13T20:12:00.047189668Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:12:00.047571 containerd[2000]: time="2025-02-13T20:12:00.047530909Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:12:00.048037 containerd[2000]: time="2025-02-13T20:12:00.048008456Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:12:00.048119 containerd[2000]: time="2025-02-13T20:12:00.048102657Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:12:00.048310 containerd[2000]: time="2025-02-13T20:12:00.048290111Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:12:00.048579 containerd[2000]: time="2025-02-13T20:12:00.048437926Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:12:00.050077 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:12:00.061487 containerd[2000]: time="2025-02-13T20:12:00.060038719Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:12:00.061487 containerd[2000]: time="2025-02-13T20:12:00.060106991Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:12:00.061487 containerd[2000]: time="2025-02-13T20:12:00.060129800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:12:00.061487 containerd[2000]: time="2025-02-13T20:12:00.060150859Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:12:00.061487 containerd[2000]: time="2025-02-13T20:12:00.060171604Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:12:00.061487 containerd[2000]: time="2025-02-13T20:12:00.060353351Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 20:12:00.061487 containerd[2000]: time="2025-02-13T20:12:00.060857935Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:12:00.061487 containerd[2000]: time="2025-02-13T20:12:00.061014545Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:12:00.061487 containerd[2000]: time="2025-02-13T20:12:00.061039868Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:12:00.061487 containerd[2000]: time="2025-02-13T20:12:00.061061348Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:12:00.061487 containerd[2000]: time="2025-02-13T20:12:00.061082677Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:12:00.061487 containerd[2000]: time="2025-02-13T20:12:00.061106655Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:12:00.061487 containerd[2000]: time="2025-02-13T20:12:00.061126729Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:12:00.061487 containerd[2000]: time="2025-02-13T20:12:00.061149244Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:12:00.062054 containerd[2000]: time="2025-02-13T20:12:00.061171952Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:12:00.062054 containerd[2000]: time="2025-02-13T20:12:00.061192971Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:12:00.062054 containerd[2000]: time="2025-02-13T20:12:00.061211835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:12:00.062054 containerd[2000]: time="2025-02-13T20:12:00.061233576Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:12:00.062054 containerd[2000]: time="2025-02-13T20:12:00.061262540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:12:00.062054 containerd[2000]: time="2025-02-13T20:12:00.061284609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:12:00.062054 containerd[2000]: time="2025-02-13T20:12:00.061303982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:12:00.062054 containerd[2000]: time="2025-02-13T20:12:00.061324726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:12:00.062054 containerd[2000]: time="2025-02-13T20:12:00.061343945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:12:00.062054 containerd[2000]: time="2025-02-13T20:12:00.061364834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:12:00.062054 containerd[2000]: time="2025-02-13T20:12:00.061385866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 20:12:00.062054 containerd[2000]: time="2025-02-13T20:12:00.061407807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:12:00.062054 containerd[2000]: time="2025-02-13T20:12:00.061427759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:12:00.062054 containerd[2000]: time="2025-02-13T20:12:00.061450363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:12:00.064327 containerd[2000]: time="2025-02-13T20:12:00.063692159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:12:00.064327 containerd[2000]: time="2025-02-13T20:12:00.063733257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:12:00.064327 containerd[2000]: time="2025-02-13T20:12:00.063757165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:12:00.064327 containerd[2000]: time="2025-02-13T20:12:00.063782468Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:12:00.064327 containerd[2000]: time="2025-02-13T20:12:00.063815956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:12:00.064327 containerd[2000]: time="2025-02-13T20:12:00.063833844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:12:00.064327 containerd[2000]: time="2025-02-13T20:12:00.063851823Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:12:00.064327 containerd[2000]: time="2025-02-13T20:12:00.063906053Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:12:00.064327 containerd[2000]: time="2025-02-13T20:12:00.063929023Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:12:00.064327 containerd[2000]: time="2025-02-13T20:12:00.063946949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:12:00.064327 containerd[2000]: time="2025-02-13T20:12:00.063966670Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:12:00.064327 containerd[2000]: time="2025-02-13T20:12:00.063982959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:12:00.064327 containerd[2000]: time="2025-02-13T20:12:00.064003054Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:12:00.064327 containerd[2000]: time="2025-02-13T20:12:00.064018634Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:12:00.064900 containerd[2000]: time="2025-02-13T20:12:00.064037830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:12:00.064941 systemd[1]: Finished sshkeys.service. 
Feb 13 20:12:00.067676 containerd[2000]: time="2025-02-13T20:12:00.067277016Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:12:00.070138 containerd[2000]: time="2025-02-13T20:12:00.069699255Z" level=info msg="Connect containerd service" Feb 13 20:12:00.070138 containerd[2000]: time="2025-02-13T20:12:00.069808252Z" level=info msg="using legacy CRI server" Feb 13 20:12:00.070138 containerd[2000]: time="2025-02-13T20:12:00.069823592Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:12:00.070138 containerd[2000]: time="2025-02-13T20:12:00.069991903Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:12:00.070750 containerd[2000]: time="2025-02-13T20:12:00.070716391Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:12:00.071146 containerd[2000]: 
time="2025-02-13T20:12:00.071124798Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:12:00.071204 containerd[2000]: time="2025-02-13T20:12:00.071186405Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:12:00.071420 containerd[2000]: time="2025-02-13T20:12:00.071380657Z" level=info msg="Start subscribing containerd event" Feb 13 20:12:00.071477 containerd[2000]: time="2025-02-13T20:12:00.071444653Z" level=info msg="Start recovering state" Feb 13 20:12:00.071543 containerd[2000]: time="2025-02-13T20:12:00.071528321Z" level=info msg="Start event monitor" Feb 13 20:12:00.071583 containerd[2000]: time="2025-02-13T20:12:00.071553708Z" level=info msg="Start snapshots syncer" Feb 13 20:12:00.071583 containerd[2000]: time="2025-02-13T20:12:00.071567562Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:12:00.071583 containerd[2000]: time="2025-02-13T20:12:00.071578463Z" level=info msg="Start streaming server" Feb 13 20:12:00.071834 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:12:00.073024 containerd[2000]: time="2025-02-13T20:12:00.071961429Z" level=info msg="containerd successfully booted in 0.184893s" Feb 13 20:12:00.110231 amazon-ssm-agent[2051]: 2025-02-13 20:11:59 INFO Checking if agent identity type EC2 can be assumed Feb 13 20:12:00.209299 amazon-ssm-agent[2051]: 2025-02-13 20:12:00 INFO Agent will take identity from EC2 Feb 13 20:12:00.308318 amazon-ssm-agent[2051]: 2025-02-13 20:12:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 20:12:00.396290 amazon-ssm-agent[2051]: 2025-02-13 20:12:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 20:12:00.396290 amazon-ssm-agent[2051]: 2025-02-13 20:12:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 20:12:00.396290 amazon-ssm-agent[2051]: 2025-02-13 20:12:00 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 20:12:00.396290 amazon-ssm-agent[2051]: 2025-02-13 20:12:00 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Feb 13 20:12:00.396290 amazon-ssm-agent[2051]: 2025-02-13 20:12:00 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 20:12:00.396290 amazon-ssm-agent[2051]: 2025-02-13 20:12:00 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 20:12:00.396290 amazon-ssm-agent[2051]: 2025-02-13 20:12:00 INFO [Registrar] Starting registrar module Feb 13 20:12:00.396290 amazon-ssm-agent[2051]: 2025-02-13 20:12:00 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 20:12:00.396290 amazon-ssm-agent[2051]: 2025-02-13 20:12:00 INFO [EC2Identity] EC2 registration was successful. 
Feb 13 20:12:00.396290 amazon-ssm-agent[2051]: 2025-02-13 20:12:00 INFO [CredentialRefresher] credentialRefresher has started Feb 13 20:12:00.396290 amazon-ssm-agent[2051]: 2025-02-13 20:12:00 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 20:12:00.396290 amazon-ssm-agent[2051]: 2025-02-13 20:12:00 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 20:12:00.408176 amazon-ssm-agent[2051]: 2025-02-13 20:12:00 INFO [CredentialRefresher] Next credential rotation will be in 31.25832669291667 minutes Feb 13 20:12:01.444598 amazon-ssm-agent[2051]: 2025-02-13 20:12:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 20:12:01.546086 amazon-ssm-agent[2051]: 2025-02-13 20:12:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2226) started Feb 13 20:12:01.654167 amazon-ssm-agent[2051]: 2025-02-13 20:12:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 20:12:01.862957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:12:01.877129 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:12:01.886342 systemd[1]: Startup finished in 8.228s (kernel) + 9.803s (userspace) = 18.031s. Feb 13 20:12:01.911341 (kubelet)[2240]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:12:03.670535 kubelet[2240]: E0213 20:12:03.670431 2240 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:12:03.673514 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:12:03.678673 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:12:06.156783 systemd-resolved[1892]: Clock change detected. Flushing caches. Feb 13 20:12:07.154872 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:12:07.167653 systemd[1]: Started sshd@0-172.31.17.230:22-139.178.89.65:49538.service - OpenSSH per-connection server daemon (139.178.89.65:49538). Feb 13 20:12:07.364407 sshd[2257]: Accepted publickey for core from 139.178.89.65 port 49538 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:12:07.366503 sshd[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:07.382519 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:12:07.388723 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:12:07.392064 systemd-logind[1974]: New session 1 of user core. Feb 13 20:12:07.416497 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:12:07.425728 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:12:07.433948 (systemd)[2263]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:12:07.586010 systemd[2263]: Queued start job for default target default.target. Feb 13 20:12:07.586639 systemd[2263]: Created slice app.slice - User Application Slice. 
Feb 13 20:12:07.586670 systemd[2263]: Reached target paths.target - Paths. Feb 13 20:12:07.586689 systemd[2263]: Reached target timers.target - Timers. Feb 13 20:12:07.592186 systemd[2263]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:12:07.604354 systemd[2263]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:12:07.604456 systemd[2263]: Reached target sockets.target - Sockets. Feb 13 20:12:07.604479 systemd[2263]: Reached target basic.target - Basic System. Feb 13 20:12:07.605560 systemd[2263]: Reached target default.target - Main User Target. Feb 13 20:12:07.605763 systemd[2263]: Startup finished in 164ms. Feb 13 20:12:07.606537 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:12:07.612419 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:12:07.760773 systemd[1]: Started sshd@1-172.31.17.230:22-139.178.89.65:49544.service - OpenSSH per-connection server daemon (139.178.89.65:49544). Feb 13 20:12:07.925427 sshd[2275]: Accepted publickey for core from 139.178.89.65 port 49544 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:12:07.927134 sshd[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:07.933528 systemd-logind[1974]: New session 2 of user core. Feb 13 20:12:07.939738 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:12:08.064036 sshd[2275]: pam_unix(sshd:session): session closed for user core Feb 13 20:12:08.068512 systemd[1]: sshd@1-172.31.17.230:22-139.178.89.65:49544.service: Deactivated successfully. Feb 13 20:12:08.073452 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:12:08.074600 systemd-logind[1974]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:12:08.075643 systemd-logind[1974]: Removed session 2. Feb 13 20:12:08.092735 systemd[1]: Started sshd@2-172.31.17.230:22-139.178.89.65:49554.service - OpenSSH per-connection server daemon (139.178.89.65:49554). Feb 13 20:12:08.254856 sshd[2283]: Accepted publickey for core from 139.178.89.65 port 49554 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:12:08.257500 sshd[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:08.264126 systemd-logind[1974]: New session 3 of user core. Feb 13 20:12:08.271654 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:12:08.393990 sshd[2283]: pam_unix(sshd:session): session closed for user core Feb 13 20:12:08.407984 systemd[1]: sshd@2-172.31.17.230:22-139.178.89.65:49554.service: Deactivated successfully. Feb 13 20:12:08.417179 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:12:08.421270 systemd-logind[1974]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:12:08.442767 systemd[1]: Started sshd@3-172.31.17.230:22-139.178.89.65:49558.service - OpenSSH per-connection server daemon (139.178.89.65:49558). Feb 13 20:12:08.444995 systemd-logind[1974]: Removed session 3. Feb 13 20:12:08.641437 sshd[2291]: Accepted publickey for core from 139.178.89.65 port 49558 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:12:08.643683 sshd[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:08.655240 systemd-logind[1974]: New session 4 of user core. Feb 13 20:12:08.664238 systemd[1]: Started session-4.scope - Session 4 of User core. 
Feb 13 20:12:08.804511 sshd[2291]: pam_unix(sshd:session): session closed for user core Feb 13 20:12:08.814290 systemd[1]: sshd@3-172.31.17.230:22-139.178.89.65:49558.service: Deactivated successfully. Feb 13 20:12:08.827959 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:12:08.829385 systemd-logind[1974]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:12:08.841166 systemd[1]: Started sshd@4-172.31.17.230:22-139.178.89.65:49572.service - OpenSSH per-connection server daemon (139.178.89.65:49572). Feb 13 20:12:08.842721 systemd-logind[1974]: Removed session 4. Feb 13 20:12:09.018812 sshd[2299]: Accepted publickey for core from 139.178.89.65 port 49572 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:12:09.021906 sshd[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:09.033174 systemd-logind[1974]: New session 5 of user core. Feb 13 20:12:09.043583 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:12:09.173247 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:12:09.173769 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:12:09.212953 sudo[2303]: pam_unix(sudo:session): session closed for user root Feb 13 20:12:09.244105 sshd[2299]: pam_unix(sshd:session): session closed for user core Feb 13 20:12:09.264827 systemd[1]: sshd@4-172.31.17.230:22-139.178.89.65:49572.service: Deactivated successfully. Feb 13 20:12:09.289166 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:12:09.290723 systemd-logind[1974]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:12:09.298812 systemd[1]: Started sshd@5-172.31.17.230:22-139.178.89.65:49586.service - OpenSSH per-connection server daemon (139.178.89.65:49586). Feb 13 20:12:09.301587 systemd-logind[1974]: Removed session 5. Feb 13 20:12:09.478337 sshd[2308]: Accepted publickey for core from 139.178.89.65 port 49586 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:12:09.480007 sshd[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:09.485889 systemd-logind[1974]: New session 6 of user core. Feb 13 20:12:09.492846 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:12:09.597886 sudo[2313]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:12:09.598649 sudo[2313]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:12:09.602926 sudo[2313]: pam_unix(sudo:session): session closed for user root Feb 13 20:12:09.610312 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:12:09.610760 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:12:09.636343 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:12:09.660866 auditctl[2316]: No rules Feb 13 20:12:09.661408 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:12:09.661847 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:12:09.688131 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:12:09.738019 augenrules[2335]: No rules Feb 13 20:12:09.742584 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Feb 13 20:12:09.746817 sudo[2312]: pam_unix(sudo:session): session closed for user root Feb 13 20:12:09.770128 sshd[2308]: pam_unix(sshd:session): session closed for user core Feb 13 20:12:09.775275 systemd[1]: sshd@5-172.31.17.230:22-139.178.89.65:49586.service: Deactivated successfully. Feb 13 20:12:09.783089 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:12:09.783977 systemd-logind[1974]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:12:09.785288 systemd-logind[1974]: Removed session 6. Feb 13 20:12:09.798596 systemd[1]: Started sshd@6-172.31.17.230:22-139.178.89.65:49588.service - OpenSSH per-connection server daemon (139.178.89.65:49588). Feb 13 20:12:09.980037 sshd[2344]: Accepted publickey for core from 139.178.89.65 port 49588 ssh2: RSA SHA256:7nv7xaFFWmIAvPewvKjLuTxkMrDcPy3WtQ5BDo3Wg0I Feb 13 20:12:09.981306 sshd[2344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:09.992454 systemd-logind[1974]: New session 7 of user core. Feb 13 20:12:09.999535 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:12:10.106696 sudo[2348]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:12:10.110082 sudo[2348]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:12:11.755990 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:12:11.771512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:12:11.818519 systemd[1]: Reloading requested from client PID 2387 ('systemctl') (unit session-7.scope)... Feb 13 20:12:11.818538 systemd[1]: Reloading... Feb 13 20:12:12.037185 zram_generator::config[2427]: No configuration found. Feb 13 20:12:12.282782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:12:12.447005 systemd[1]: Reloading finished in 627 ms. Feb 13 20:12:12.560424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:12:12.564852 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:12:12.574208 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:12:12.574546 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:12:12.585460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:12:12.832324 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:12:12.852664 (kubelet)[2500]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:12:12.924307 kubelet[2500]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:12:12.924307 kubelet[2500]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:12:12.924307 kubelet[2500]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 20:12:12.924798 kubelet[2500]: I0213 20:12:12.924417 2500 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:12:13.355044 kubelet[2500]: I0213 20:12:13.354995 2500 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:12:13.355044 kubelet[2500]: I0213 20:12:13.355034 2500 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:12:13.355339 kubelet[2500]: I0213 20:12:13.355317 2500 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:12:13.392480 kubelet[2500]: I0213 20:12:13.392095 2500 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:12:13.413540 kubelet[2500]: I0213 20:12:13.413499 2500 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:12:13.414116 kubelet[2500]: I0213 20:12:13.414072 2500 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:12:13.414547 kubelet[2500]: I0213 20:12:13.414113 2500 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.17.230","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:12:13.415306 kubelet[2500]: I0213 20:12:13.415284 2500 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:12:13.415370 kubelet[2500]: I0213 20:12:13.415312 2500 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:12:13.415477 kubelet[2500]: I0213 20:12:13.415461 2500 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:12:13.416609 kubelet[2500]: I0213 20:12:13.416588 2500 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:12:13.416685 kubelet[2500]: I0213 20:12:13.416613 2500 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:12:13.416685 kubelet[2500]: I0213 20:12:13.416642 2500 kubelet.go:312] "Adding apiserver pod source" Feb 13 
20:12:13.416685 kubelet[2500]: I0213 20:12:13.416662 2500 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:12:13.417316 kubelet[2500]: E0213 20:12:13.417274 2500 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:13.419999 kubelet[2500]: E0213 20:12:13.419964 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:13.424974 kubelet[2500]: I0213 20:12:13.424863 2500 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:12:13.427080 kubelet[2500]: I0213 20:12:13.427025 2500 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:12:13.427188 kubelet[2500]: W0213 20:12:13.427132 2500 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:12:13.428121 kubelet[2500]: I0213 20:12:13.427848 2500 server.go:1264] "Started kubelet" Feb 13 20:12:13.429853 kubelet[2500]: I0213 20:12:13.429831 2500 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:12:13.439166 kubelet[2500]: W0213 20:12:13.436828 2500 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.17.230" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 20:12:13.439166 kubelet[2500]: E0213 20:12:13.436872 2500 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.17.230" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 20:12:13.439166 kubelet[2500]: W0213 20:12:13.436949 2500 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 20:12:13.439166 kubelet[2500]: E0213 20:12:13.436961 2500 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 20:12:13.442775 kubelet[2500]: E0213 20:12:13.442640 2500 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.17.230.1823dd99babcccc8 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.17.230,UID:172.31.17.230,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.17.230,},FirstTimestamp:2025-02-13 20:12:13.427813576 +0000 UTC m=+0.571140979,LastTimestamp:2025-02-13 20:12:13.427813576 +0000 UTC m=+0.571140979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.17.230,}" Feb 13 20:12:13.443138 kubelet[2500]: I0213 20:12:13.442800 2500 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:12:13.444520 kubelet[2500]: I0213 20:12:13.444496 2500 server.go:455] "Adding debug handlers to kubelet 
server" Feb 13 20:12:13.445822 kubelet[2500]: I0213 20:12:13.445767 2500 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:12:13.446412 kubelet[2500]: I0213 20:12:13.446380 2500 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:12:13.450414 kubelet[2500]: I0213 20:12:13.449671 2500 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:12:13.450414 kubelet[2500]: I0213 20:12:13.449790 2500 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:12:13.450414 kubelet[2500]: I0213 20:12:13.449851 2500 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:12:13.454519 kubelet[2500]: I0213 20:12:13.453237 2500 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:12:13.454519 kubelet[2500]: I0213 20:12:13.453364 2500 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:12:13.454991 kubelet[2500]: E0213 20:12:13.454973 2500 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:12:13.456815 kubelet[2500]: I0213 20:12:13.456780 2500 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:12:13.462137 kubelet[2500]: E0213 20:12:13.460724 2500 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.17.230\" not found" node="172.31.17.230" Feb 13 20:12:13.508996 kubelet[2500]: I0213 20:12:13.508973 2500 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:12:13.509539 kubelet[2500]: I0213 20:12:13.509515 2500 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:12:13.509628 kubelet[2500]: I0213 20:12:13.509619 2500 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:12:13.515134 kubelet[2500]: I0213 20:12:13.515112 2500 policy_none.go:49] "None policy: Start" Feb 13 20:12:13.517186 kubelet[2500]: I0213 20:12:13.517164 2500 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:12:13.517337 kubelet[2500]: I0213 20:12:13.517327 2500 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:12:13.522850 kubelet[2500]: I0213 20:12:13.522823 2500 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:12:13.523290 kubelet[2500]: I0213 20:12:13.523245 2500 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:12:13.523612 kubelet[2500]: I0213 20:12:13.523599 2500 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:12:13.544236 kubelet[2500]: E0213 20:12:13.544205 2500 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.17.230\" not found" Feb 13 20:12:13.551984 kubelet[2500]: I0213 20:12:13.551959 2500 kubelet_node_status.go:73] "Attempting to register node" node="172.31.17.230" Feb 13 20:12:13.564090 kubelet[2500]: I0213 20:12:13.564026 2500 kubelet_node_status.go:76] "Successfully registered node" node="172.31.17.230" Feb 13 20:12:13.578770 kubelet[2500]: I0213 20:12:13.578725 2500 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Feb 13 20:12:13.580920 kubelet[2500]: I0213 20:12:13.580874 2500 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:12:13.580920 kubelet[2500]: I0213 20:12:13.580912 2500 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:12:13.581110 kubelet[2500]: I0213 20:12:13.580939 2500 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:12:13.581110 kubelet[2500]: E0213 20:12:13.580985 2500 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 20:12:13.610914 kubelet[2500]: E0213 20:12:13.610781 2500 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.230\" not found" Feb 13 20:12:13.710954 kubelet[2500]: E0213 20:12:13.710913 2500 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.230\" not found" Feb 13 20:12:13.811427 kubelet[2500]: E0213 20:12:13.811391 2500 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.230\" not found" Feb 13 20:12:13.858256 sudo[2348]: pam_unix(sudo:session): session closed for user root Feb 13 20:12:13.881568 sshd[2344]: pam_unix(sshd:session): session closed for user core Feb 13 20:12:13.887086 systemd[1]: sshd@6-172.31.17.230:22-139.178.89.65:49588.service: Deactivated successfully. Feb 13 20:12:13.901963 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:12:13.906263 systemd-logind[1974]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:12:13.911860 kubelet[2500]: E0213 20:12:13.911669 2500 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.230\" not found" Feb 13 20:12:13.914846 systemd-logind[1974]: Removed session 7. 
Feb 13 20:12:14.013576 kubelet[2500]: E0213 20:12:14.013521 2500 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.230\" not found" Feb 13 20:12:14.114371 kubelet[2500]: E0213 20:12:14.114308 2500 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.230\" not found" Feb 13 20:12:14.215381 kubelet[2500]: E0213 20:12:14.215251 2500 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.230\" not found" Feb 13 20:12:14.316190 kubelet[2500]: E0213 20:12:14.316140 2500 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.230\" not found" Feb 13 20:12:14.369982 kubelet[2500]: I0213 20:12:14.369937 2500 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 20:12:14.370212 kubelet[2500]: W0213 20:12:14.370194 2500 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 20:12:14.370456 kubelet[2500]: W0213 20:12:14.370242 2500 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 20:12:14.416294 kubelet[2500]: E0213 20:12:14.416240 2500 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.230\" not found" Feb 13 20:12:14.420565 kubelet[2500]: E0213 20:12:14.420515 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:14.517267 kubelet[2500]: E0213 20:12:14.516439 2500 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.230\" not found" Feb 13 20:12:14.616775 kubelet[2500]: E0213 20:12:14.616726 2500 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.230\" not found" Feb 13 20:12:14.717398 kubelet[2500]: E0213 20:12:14.717341 2500 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.230\" not found" Feb 13 20:12:14.818196 kubelet[2500]: E0213 20:12:14.818011 2500 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.230\" not found" Feb 13 20:12:14.918741 kubelet[2500]: E0213 20:12:14.918693 2500 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.230\" not found" Feb 13 20:12:15.019701 kubelet[2500]: E0213 20:12:15.019646 2500 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.230\" not found" Feb 13 20:12:15.120864 kubelet[2500]: I0213 20:12:15.120829 2500 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 20:12:15.121330 containerd[2000]: time="2025-02-13T20:12:15.121288950Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 20:12:15.122002 kubelet[2500]: I0213 20:12:15.121507 2500 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 20:12:15.421214 kubelet[2500]: I0213 20:12:15.420908 2500 apiserver.go:52] "Watching apiserver" Feb 13 20:12:15.421214 kubelet[2500]: E0213 20:12:15.420930 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:15.431084 kubelet[2500]: I0213 20:12:15.429091 2500 topology_manager.go:215] "Topology Admit Handler" podUID="719c03e5-a3e9-48d1-819e-8eff8acb5c54" podNamespace="kube-system" podName="cilium-g2c4s" Feb 13 20:12:15.431084 kubelet[2500]: I0213 20:12:15.429254 2500 topology_manager.go:215] "Topology Admit Handler" podUID="b9701bb3-5dde-410b-8b85-067b79614540" podNamespace="kube-system" podName="kube-proxy-vf52t" Feb 13 20:12:15.451233 kubelet[2500]: I0213 20:12:15.451191 2500 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:12:15.462881 kubelet[2500]: I0213 20:12:15.462773 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/719c03e5-a3e9-48d1-819e-8eff8acb5c54-clustermesh-secrets\") pod \"cilium-g2c4s\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " pod="kube-system/cilium-g2c4s" Feb 13 20:12:15.462881 kubelet[2500]: I0213 20:12:15.462819 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-host-proc-sys-kernel\") pod \"cilium-g2c4s\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " pod="kube-system/cilium-g2c4s" Feb 13 20:12:15.462881 kubelet[2500]: I0213 20:12:15.462847 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-etc-cni-netd\") pod \"cilium-g2c4s\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " pod="kube-system/cilium-g2c4s" Feb 13 20:12:15.462881 kubelet[2500]: I0213 20:12:15.462869 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-hostproc\") pod \"cilium-g2c4s\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " pod="kube-system/cilium-g2c4s" Feb 13 20:12:15.462881 kubelet[2500]: I0213 20:12:15.462891 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-lib-modules\") pod \"cilium-g2c4s\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " pod="kube-system/cilium-g2c4s" Feb 13 20:12:15.463220 kubelet[2500]: I0213 20:12:15.462911 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/719c03e5-a3e9-48d1-819e-8eff8acb5c54-hubble-tls\") pod \"cilium-g2c4s\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " pod="kube-system/cilium-g2c4s" Feb 13 20:12:15.463220 kubelet[2500]: I0213 20:12:15.462939 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b9701bb3-5dde-410b-8b85-067b79614540-kube-proxy\") pod \"kube-proxy-vf52t\" 
(UID: \"b9701bb3-5dde-410b-8b85-067b79614540\") " pod="kube-system/kube-proxy-vf52t" Feb 13 20:12:15.463220 kubelet[2500]: I0213 20:12:15.462959 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-cilium-run\") pod \"cilium-g2c4s\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " pod="kube-system/cilium-g2c4s" Feb 13 20:12:15.463220 kubelet[2500]: I0213 20:12:15.462980 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-xtables-lock\") pod \"cilium-g2c4s\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " pod="kube-system/cilium-g2c4s" Feb 13 20:12:15.463220 kubelet[2500]: I0213 20:12:15.463002 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/719c03e5-a3e9-48d1-819e-8eff8acb5c54-cilium-config-path\") pod \"cilium-g2c4s\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " pod="kube-system/cilium-g2c4s" Feb 13 20:12:15.463220 kubelet[2500]: I0213 20:12:15.463022 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-host-proc-sys-net\") pod \"cilium-g2c4s\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " pod="kube-system/cilium-g2c4s" Feb 13 20:12:15.463563 kubelet[2500]: I0213 20:12:15.463045 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9701bb3-5dde-410b-8b85-067b79614540-xtables-lock\") pod \"kube-proxy-vf52t\" (UID: \"b9701bb3-5dde-410b-8b85-067b79614540\") " pod="kube-system/kube-proxy-vf52t" Feb 13 20:12:15.463563 kubelet[2500]: I0213 20:12:15.463083 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhtrs\" (UniqueName: \"kubernetes.io/projected/b9701bb3-5dde-410b-8b85-067b79614540-kube-api-access-dhtrs\") pod \"kube-proxy-vf52t\" (UID: \"b9701bb3-5dde-410b-8b85-067b79614540\") " pod="kube-system/kube-proxy-vf52t" Feb 13 20:12:15.463563 kubelet[2500]: I0213 20:12:15.463104 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-bpf-maps\") pod \"cilium-g2c4s\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " pod="kube-system/cilium-g2c4s" Feb 13 20:12:15.463563 kubelet[2500]: I0213 20:12:15.463127 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-cni-path\") pod \"cilium-g2c4s\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " pod="kube-system/cilium-g2c4s" Feb 13 20:12:15.463563 kubelet[2500]: I0213 20:12:15.463150 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2bcb\" (UniqueName: \"kubernetes.io/projected/719c03e5-a3e9-48d1-819e-8eff8acb5c54-kube-api-access-l2bcb\") pod \"cilium-g2c4s\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " pod="kube-system/cilium-g2c4s" Feb 13 20:12:15.463685 kubelet[2500]: I0213 20:12:15.463175 2500 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9701bb3-5dde-410b-8b85-067b79614540-lib-modules\") pod \"kube-proxy-vf52t\" (UID: \"b9701bb3-5dde-410b-8b85-067b79614540\") " pod="kube-system/kube-proxy-vf52t" Feb 13 20:12:15.463685 kubelet[2500]: I0213 20:12:15.463198 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-cilium-cgroup\") pod \"cilium-g2c4s\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " pod="kube-system/cilium-g2c4s" Feb 13 20:12:15.737146 containerd[2000]: time="2025-02-13T20:12:15.737006741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g2c4s,Uid:719c03e5-a3e9-48d1-819e-8eff8acb5c54,Namespace:kube-system,Attempt:0,}" Feb 13 20:12:15.738298 containerd[2000]: time="2025-02-13T20:12:15.737020838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vf52t,Uid:b9701bb3-5dde-410b-8b85-067b79614540,Namespace:kube-system,Attempt:0,}" Feb 13 20:12:16.422134 kubelet[2500]: E0213 20:12:16.422071 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:16.517615 containerd[2000]: time="2025-02-13T20:12:16.517558581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:12:16.526086 containerd[2000]: time="2025-02-13T20:12:16.525997228Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:12:16.528719 containerd[2000]: time="2025-02-13T20:12:16.527209154Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:12:16.532073 containerd[2000]: time="2025-02-13T20:12:16.530478459Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:12:16.532073 containerd[2000]: time="2025-02-13T20:12:16.530687466Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 20:12:16.537558 containerd[2000]: time="2025-02-13T20:12:16.537504280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:12:16.542493 containerd[2000]: time="2025-02-13T20:12:16.542435111Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 804.371669ms" Feb 13 20:12:16.543843 containerd[2000]: time="2025-02-13T20:12:16.543795862Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 806.643388ms" Feb 13 20:12:16.582594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2165019995.mount: Deactivated successfully. Feb 13 20:12:16.743065 containerd[2000]: time="2025-02-13T20:12:16.742862351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:12:16.743546 containerd[2000]: time="2025-02-13T20:12:16.742944364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:12:16.743546 containerd[2000]: time="2025-02-13T20:12:16.743230313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:12:16.743546 containerd[2000]: time="2025-02-13T20:12:16.743356238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:12:16.755130 containerd[2000]: time="2025-02-13T20:12:16.752545033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:12:16.755130 containerd[2000]: time="2025-02-13T20:12:16.752614439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:12:16.755130 containerd[2000]: time="2025-02-13T20:12:16.752648726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:12:16.756643 containerd[2000]: time="2025-02-13T20:12:16.756235154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:12:16.950525 containerd[2000]: time="2025-02-13T20:12:16.950429545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g2c4s,Uid:719c03e5-a3e9-48d1-819e-8eff8acb5c54,Namespace:kube-system,Attempt:0,} returns sandbox id \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\"" Feb 13 20:12:16.956730 containerd[2000]: time="2025-02-13T20:12:16.956550824Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 20:12:16.964748 containerd[2000]: time="2025-02-13T20:12:16.964592990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vf52t,Uid:b9701bb3-5dde-410b-8b85-067b79614540,Namespace:kube-system,Attempt:0,} returns sandbox id \"aba2709e11a1b717b606b743c5b62710de1dbb202b923adaff4a39535a2fe588\"" Feb 13 20:12:17.422889 kubelet[2500]: E0213 20:12:17.422846 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:18.423421 kubelet[2500]: E0213 20:12:18.423376 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:19.424551 kubelet[2500]: E0213 20:12:19.424511 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:20.424809 kubelet[2500]: E0213 20:12:20.424716 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:21.425823 kubelet[2500]: E0213 20:12:21.425651 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:22.425995 kubelet[2500]: E0213 20:12:22.425954 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:22.797622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount336684212.mount: Deactivated successfully. 
Feb 13 20:12:23.427335 kubelet[2500]: E0213 20:12:23.427265 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:24.428150 kubelet[2500]: E0213 20:12:24.427573 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:25.428224 kubelet[2500]: E0213 20:12:25.428185 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:26.224833 containerd[2000]: time="2025-02-13T20:12:26.224789024Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:12:26.226221 containerd[2000]: time="2025-02-13T20:12:26.226086275Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 20:12:26.230070 containerd[2000]: time="2025-02-13T20:12:26.228416300Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:12:26.236154 containerd[2000]: time="2025-02-13T20:12:26.236098576Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.279354329s" Feb 13 20:12:26.236154 containerd[2000]: time="2025-02-13T20:12:26.236156943Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 20:12:26.240922 containerd[2000]: time="2025-02-13T20:12:26.240886040Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 20:12:26.241109 containerd[2000]: time="2025-02-13T20:12:26.241083738Z" level=info msg="CreateContainer within sandbox \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 20:12:26.268041 containerd[2000]: time="2025-02-13T20:12:26.267992849Z" level=info msg="CreateContainer within sandbox \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f3d125c5c91cda9d100c9db5075fedae01e2dddb341b33c542b5a45ce9d7e03d\"" Feb 13 20:12:26.269235 containerd[2000]: time="2025-02-13T20:12:26.269197464Z" level=info msg="StartContainer for \"f3d125c5c91cda9d100c9db5075fedae01e2dddb341b33c542b5a45ce9d7e03d\"" Feb 13 20:12:26.383610 containerd[2000]: time="2025-02-13T20:12:26.383562968Z" level=info msg="StartContainer for \"f3d125c5c91cda9d100c9db5075fedae01e2dddb341b33c542b5a45ce9d7e03d\" returns successfully" Feb 13 20:12:26.428818 kubelet[2500]: E0213 20:12:26.428738 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:26.598892 containerd[2000]: time="2025-02-13T20:12:26.598827015Z" level=info msg="shim disconnected" 
id=f3d125c5c91cda9d100c9db5075fedae01e2dddb341b33c542b5a45ce9d7e03d namespace=k8s.io Feb 13 20:12:26.598892 containerd[2000]: time="2025-02-13T20:12:26.598882816Z" level=warning msg="cleaning up after shim disconnected" id=f3d125c5c91cda9d100c9db5075fedae01e2dddb341b33c542b5a45ce9d7e03d namespace=k8s.io Feb 13 20:12:26.598892 containerd[2000]: time="2025-02-13T20:12:26.598894816Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:12:26.723926 containerd[2000]: time="2025-02-13T20:12:26.722908451Z" level=info msg="CreateContainer within sandbox \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 20:12:26.808389 containerd[2000]: time="2025-02-13T20:12:26.808342572Z" level=info msg="CreateContainer within sandbox \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"61693737953f35ea31ca59706cc6f3e8693b7b4bc0c8097a0055c1266c01d549\"" Feb 13 20:12:26.809286 containerd[2000]: time="2025-02-13T20:12:26.809245073Z" level=info msg="StartContainer for \"61693737953f35ea31ca59706cc6f3e8693b7b4bc0c8097a0055c1266c01d549\"" Feb 13 20:12:26.995235 containerd[2000]: time="2025-02-13T20:12:26.987901283Z" level=info msg="StartContainer for \"61693737953f35ea31ca59706cc6f3e8693b7b4bc0c8097a0055c1266c01d549\" returns successfully" Feb 13 20:12:27.015038 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:12:27.015519 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:12:27.015599 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:12:27.030163 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:12:27.078768 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:12:27.199087 containerd[2000]: time="2025-02-13T20:12:27.198400712Z" level=info msg="shim disconnected" id=61693737953f35ea31ca59706cc6f3e8693b7b4bc0c8097a0055c1266c01d549 namespace=k8s.io Feb 13 20:12:27.199087 containerd[2000]: time="2025-02-13T20:12:27.198461655Z" level=warning msg="cleaning up after shim disconnected" id=61693737953f35ea31ca59706cc6f3e8693b7b4bc0c8097a0055c1266c01d549 namespace=k8s.io Feb 13 20:12:27.199087 containerd[2000]: time="2025-02-13T20:12:27.198473400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:12:27.266043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3d125c5c91cda9d100c9db5075fedae01e2dddb341b33c542b5a45ce9d7e03d-rootfs.mount: Deactivated successfully. 
Feb 13 20:12:27.428928 kubelet[2500]: E0213 20:12:27.428852 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:27.720185 containerd[2000]: time="2025-02-13T20:12:27.720077304Z" level=info msg="CreateContainer within sandbox \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 20:12:27.750355 containerd[2000]: time="2025-02-13T20:12:27.749660790Z" level=info msg="CreateContainer within sandbox \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7\"" Feb 13 20:12:27.762070 containerd[2000]: time="2025-02-13T20:12:27.760613217Z" level=info msg="StartContainer for \"0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7\"" Feb 13 20:12:27.838187 systemd[1]: run-containerd-runc-k8s.io-0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7-runc.1LNgoK.mount: Deactivated successfully. Feb 13 20:12:27.902484 containerd[2000]: time="2025-02-13T20:12:27.902437539Z" level=info msg="StartContainer for \"0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7\" returns successfully" Feb 13 20:12:28.064905 containerd[2000]: time="2025-02-13T20:12:28.064771014Z" level=info msg="shim disconnected" id=0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7 namespace=k8s.io Feb 13 20:12:28.065587 containerd[2000]: time="2025-02-13T20:12:28.065556106Z" level=warning msg="cleaning up after shim disconnected" id=0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7 namespace=k8s.io Feb 13 20:12:28.065791 containerd[2000]: time="2025-02-13T20:12:28.065775429Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:12:28.087314 containerd[2000]: time="2025-02-13T20:12:28.087258638Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:12:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 20:12:28.260156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7-rootfs.mount: Deactivated successfully. Feb 13 20:12:28.260368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78925492.mount: Deactivated successfully. 
Feb 13 20:12:28.429344 kubelet[2500]: E0213 20:12:28.429281 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:28.588916 containerd[2000]: time="2025-02-13T20:12:28.588867072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:12:28.589886 containerd[2000]: time="2025-02-13T20:12:28.589748523Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 20:12:28.592087 containerd[2000]: time="2025-02-13T20:12:28.591238338Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:12:28.594163 containerd[2000]: time="2025-02-13T20:12:28.594128837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:12:28.594826 containerd[2000]: time="2025-02-13T20:12:28.594792517Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.353690642s" Feb 13 20:12:28.594920 containerd[2000]: time="2025-02-13T20:12:28.594836527Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 20:12:28.597585 containerd[2000]: time="2025-02-13T20:12:28.597550288Z" level=info msg="CreateContainer within sandbox \"aba2709e11a1b717b606b743c5b62710de1dbb202b923adaff4a39535a2fe588\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:12:28.622191 containerd[2000]: time="2025-02-13T20:12:28.622142884Z" level=info msg="CreateContainer within sandbox \"aba2709e11a1b717b606b743c5b62710de1dbb202b923adaff4a39535a2fe588\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2154d5049573e3f0a6deb0a996b4d4136c9bb161249d97208ae65195549d5ce6\"" Feb 13 20:12:28.622889 containerd[2000]: time="2025-02-13T20:12:28.622847692Z" level=info msg="StartContainer for \"2154d5049573e3f0a6deb0a996b4d4136c9bb161249d97208ae65195549d5ce6\"" Feb 13 20:12:28.703096 containerd[2000]: time="2025-02-13T20:12:28.700821976Z" level=info msg="StartContainer for \"2154d5049573e3f0a6deb0a996b4d4136c9bb161249d97208ae65195549d5ce6\" returns successfully" Feb 13 20:12:28.729118 containerd[2000]: time="2025-02-13T20:12:28.729017982Z" level=info msg="CreateContainer within sandbox \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 20:12:28.757331 containerd[2000]: time="2025-02-13T20:12:28.757290663Z" level=info msg="CreateContainer within sandbox \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b367a41ffabcb802e9698e533e25268cb97438c7123ade9cad2f3f32bc5a29e3\"" Feb 13 20:12:28.758003 containerd[2000]: time="2025-02-13T20:12:28.757968516Z" level=info msg="StartContainer for 
\"b367a41ffabcb802e9698e533e25268cb97438c7123ade9cad2f3f32bc5a29e3\"" Feb 13 20:12:28.881721 containerd[2000]: time="2025-02-13T20:12:28.881549623Z" level=info msg="StartContainer for \"b367a41ffabcb802e9698e533e25268cb97438c7123ade9cad2f3f32bc5a29e3\" returns successfully" Feb 13 20:12:29.019538 containerd[2000]: time="2025-02-13T20:12:29.019296587Z" level=info msg="shim disconnected" id=b367a41ffabcb802e9698e533e25268cb97438c7123ade9cad2f3f32bc5a29e3 namespace=k8s.io Feb 13 20:12:29.019538 containerd[2000]: time="2025-02-13T20:12:29.019345128Z" level=warning msg="cleaning up after shim disconnected" id=b367a41ffabcb802e9698e533e25268cb97438c7123ade9cad2f3f32bc5a29e3 namespace=k8s.io Feb 13 20:12:29.019538 containerd[2000]: time="2025-02-13T20:12:29.019353304Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:12:29.430109 kubelet[2500]: E0213 20:12:29.430044 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:29.754915 containerd[2000]: time="2025-02-13T20:12:29.754797604Z" level=info msg="CreateContainer within sandbox \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 20:12:29.775275 kubelet[2500]: I0213 20:12:29.775180 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vf52t" podStartSLOduration=5.144352018 podStartE2EDuration="16.773426229s" podCreationTimestamp="2025-02-13 20:12:13 +0000 UTC" firstStartedPulling="2025-02-13 20:12:16.966794911 +0000 UTC m=+4.110122305" lastFinishedPulling="2025-02-13 20:12:28.595869115 +0000 UTC m=+15.739196516" observedRunningTime="2025-02-13 20:12:28.79807354 +0000 UTC m=+15.941400946" watchObservedRunningTime="2025-02-13 20:12:29.773426229 +0000 UTC m=+16.916753636" Feb 13 20:12:29.776624 containerd[2000]: time="2025-02-13T20:12:29.776454817Z" level=info msg="CreateContainer within sandbox \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8\"" Feb 13 20:12:29.777613 containerd[2000]: time="2025-02-13T20:12:29.777441210Z" level=info msg="StartContainer for \"43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8\"" Feb 13 20:12:29.872662 containerd[2000]: time="2025-02-13T20:12:29.872609246Z" level=info msg="StartContainer for \"43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8\" returns successfully" Feb 13 20:12:30.155312 kubelet[2500]: I0213 20:12:30.155151 2500 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 20:12:30.430870 kubelet[2500]: E0213 20:12:30.430705 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:30.650471 kernel: Initializing XFRM netlink socket Feb 13 20:12:30.733062 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 13 20:12:30.813069 kubelet[2500]: I0213 20:12:30.808740 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g2c4s" podStartSLOduration=8.526755698 podStartE2EDuration="17.808711143s" podCreationTimestamp="2025-02-13 20:12:13 +0000 UTC" firstStartedPulling="2025-02-13 20:12:16.95558936 +0000 UTC m=+4.098916754" lastFinishedPulling="2025-02-13 20:12:26.237544802 +0000 UTC m=+13.380872199" observedRunningTime="2025-02-13 20:12:30.808435982 +0000 UTC m=+17.951763388" watchObservedRunningTime="2025-02-13 20:12:30.808711143 +0000 UTC m=+17.952038542" Feb 13 20:12:31.431376 kubelet[2500]: E0213 20:12:31.431311 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:32.417332 systemd-networkd[1566]: cilium_host: Link UP Feb 13 20:12:32.417536 systemd-networkd[1566]: cilium_net: Link UP Feb 13 20:12:32.419347 systemd-networkd[1566]: cilium_net: Gained carrier Feb 13 20:12:32.419587 systemd-networkd[1566]: cilium_host: Gained carrier Feb 13 20:12:32.419815 systemd-networkd[1566]: cilium_net: Gained IPv6LL Feb 13 20:12:32.423354 systemd-networkd[1566]: cilium_host: Gained IPv6LL Feb 13 20:12:32.429851 (udev-worker)[3201]: Network interface NamePolicy= disabled on kernel command line. Feb 13 20:12:32.431447 (udev-worker)[3008]: Network interface NamePolicy= disabled on kernel command line. Feb 13 20:12:32.440470 kubelet[2500]: E0213 20:12:32.433044 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:32.558426 systemd-networkd[1566]: cilium_vxlan: Link UP Feb 13 20:12:32.558436 systemd-networkd[1566]: cilium_vxlan: Gained carrier Feb 13 20:12:32.758709 kubelet[2500]: I0213 20:12:32.758044 2500 topology_manager.go:215] "Topology Admit Handler" podUID="54d62b7b-ae71-43b8-8207-aec4a25761b4" podNamespace="default" podName="nginx-deployment-85f456d6dd-bdfm8" Feb 13 20:12:32.840760 kubelet[2500]: I0213 20:12:32.840264 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7ls2\" (UniqueName: \"kubernetes.io/projected/54d62b7b-ae71-43b8-8207-aec4a25761b4-kube-api-access-g7ls2\") pod \"nginx-deployment-85f456d6dd-bdfm8\" (UID: \"54d62b7b-ae71-43b8-8207-aec4a25761b4\") " pod="default/nginx-deployment-85f456d6dd-bdfm8" Feb 13 20:12:32.851282 kernel: NET: Registered PF_ALG protocol family Feb 13 20:12:33.066932 containerd[2000]: time="2025-02-13T20:12:33.066813226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-bdfm8,Uid:54d62b7b-ae71-43b8-8207-aec4a25761b4,Namespace:default,Attempt:0,}" Feb 13 20:12:33.417559 kubelet[2500]: E0213 20:12:33.417390 2500 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:33.434041 kubelet[2500]: E0213 20:12:33.433992 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:33.890207 systemd-networkd[1566]: lxc_health: Link UP Feb 13 20:12:33.897745 (udev-worker)[3219]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 20:12:33.904756 systemd-networkd[1566]: lxc_health: Gained carrier Feb 13 20:12:34.216512 systemd-networkd[1566]: lxc6dc7889456fd: Link UP Feb 13 20:12:34.225199 kernel: eth0: renamed from tmp7933c Feb 13 20:12:34.236301 systemd-networkd[1566]: lxc6dc7889456fd: Gained carrier Feb 13 20:12:34.436323 kubelet[2500]: E0213 20:12:34.434739 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:34.556280 systemd-networkd[1566]: cilium_vxlan: Gained IPv6LL Feb 13 20:12:35.435695 kubelet[2500]: E0213 20:12:35.435630 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:35.645227 systemd-networkd[1566]: lxc6dc7889456fd: Gained IPv6LL Feb 13 20:12:35.772573 systemd-networkd[1566]: lxc_health: Gained IPv6LL Feb 13 20:12:36.436310 kubelet[2500]: E0213 20:12:36.436073 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:37.436946 kubelet[2500]: E0213 20:12:37.436864 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:38.156411 ntpd[1960]: Listen normally on 6 cilium_host 192.168.1.53:123 Feb 13 20:12:38.157846 ntpd[1960]: 13 Feb 20:12:38 ntpd[1960]: Listen normally on 6 cilium_host 192.168.1.53:123 Feb 13 20:12:38.157846 ntpd[1960]: 13 Feb 20:12:38 ntpd[1960]: Listen normally on 7 cilium_net [fe80::a852:7fff:fe7c:49fa%3]:123 Feb 13 20:12:38.157846 ntpd[1960]: 13 Feb 20:12:38 ntpd[1960]: Listen normally on 8 cilium_host [fe80::aca2:4fff:feaa:576a%4]:123 Feb 13 20:12:38.157846 ntpd[1960]: 13 Feb 20:12:38 ntpd[1960]: Listen normally on 9 cilium_vxlan [fe80::34ea:3ff:fe26:fa08%5]:123 Feb 13 20:12:38.157846 ntpd[1960]: 13 Feb 20:12:38 ntpd[1960]: Listen normally on 10 lxc_health [fe80::8c81:d6ff:fef4:f95a%7]:123 Feb 13 20:12:38.157846 ntpd[1960]: 13 Feb 20:12:38 ntpd[1960]: Listen normally on 11 lxc6dc7889456fd [fe80::608f:53ff:fe96:4827%9]:123 Feb 13 20:12:38.156515 ntpd[1960]: Listen normally on 7 cilium_net [fe80::a852:7fff:fe7c:49fa%3]:123 Feb 13 20:12:38.156660 ntpd[1960]: Listen normally on 8 cilium_host [fe80::aca2:4fff:feaa:576a%4]:123 Feb 13 20:12:38.156877 ntpd[1960]: Listen normally on 9 cilium_vxlan [fe80::34ea:3ff:fe26:fa08%5]:123 Feb 13 20:12:38.156921 ntpd[1960]: Listen normally on 10 lxc_health [fe80::8c81:d6ff:fef4:f95a%7]:123 Feb 13 20:12:38.156961 ntpd[1960]: Listen normally on 11 lxc6dc7889456fd [fe80::608f:53ff:fe96:4827%9]:123 Feb 13 20:12:38.438400 kubelet[2500]: E0213 20:12:38.437204 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:39.438434 kubelet[2500]: E0213 20:12:39.438348 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:40.438739 kubelet[2500]: E0213 20:12:40.438674 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:40.963293 containerd[2000]: time="2025-02-13T20:12:40.963153837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:12:40.963293 containerd[2000]: time="2025-02-13T20:12:40.963237116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:12:40.963293 containerd[2000]: time="2025-02-13T20:12:40.963258248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:12:40.964166 containerd[2000]: time="2025-02-13T20:12:40.963368257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:12:41.053343 containerd[2000]: time="2025-02-13T20:12:41.053305151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-bdfm8,Uid:54d62b7b-ae71-43b8-8207-aec4a25761b4,Namespace:default,Attempt:0,} returns sandbox id \"7933c434a2dde416cdc1256bcf02ed85b05ea4daaeeabab28d156a6ca1496b46\"" Feb 13 20:12:41.055630 containerd[2000]: time="2025-02-13T20:12:41.055339535Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 20:12:41.439249 kubelet[2500]: E0213 20:12:41.439211 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:42.440856 kubelet[2500]: E0213 20:12:42.440397 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:43.442383 kubelet[2500]: E0213 20:12:43.441198 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:43.923369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3319394676.mount: Deactivated successfully. Feb 13 20:12:44.441631 kubelet[2500]: E0213 20:12:44.441590 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:44.585212 update_engine[1975]: I20250213 20:12:44.584193 1975 update_attempter.cc:509] Updating boot flags... 
Feb 13 20:12:44.726506 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3629) Feb 13 20:12:45.074169 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3630) Feb 13 20:12:45.443663 kubelet[2500]: E0213 20:12:45.443498 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:45.518784 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3630) Feb 13 20:12:46.412917 containerd[2000]: time="2025-02-13T20:12:46.412858562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:12:46.415112 containerd[2000]: time="2025-02-13T20:12:46.415042689Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 20:12:46.415962 containerd[2000]: time="2025-02-13T20:12:46.415586860Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:12:46.421094 containerd[2000]: time="2025-02-13T20:12:46.419688097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:12:46.421262 containerd[2000]: time="2025-02-13T20:12:46.421223073Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 5.365825155s" Feb 13 20:12:46.421332 containerd[2000]: time="2025-02-13T20:12:46.421267050Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 20:12:46.425620 containerd[2000]: time="2025-02-13T20:12:46.425584983Z" level=info msg="CreateContainer within sandbox \"7933c434a2dde416cdc1256bcf02ed85b05ea4daaeeabab28d156a6ca1496b46\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 20:12:46.448870 kubelet[2500]: E0213 20:12:46.448621 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:46.463945 containerd[2000]: time="2025-02-13T20:12:46.463867672Z" level=info msg="CreateContainer within sandbox \"7933c434a2dde416cdc1256bcf02ed85b05ea4daaeeabab28d156a6ca1496b46\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"931eb66456d4ff50816e49588ec8271a3ec84e76644c309713b3ceeef7d01516\"" Feb 13 20:12:46.465934 containerd[2000]: time="2025-02-13T20:12:46.465039164Z" level=info msg="StartContainer for \"931eb66456d4ff50816e49588ec8271a3ec84e76644c309713b3ceeef7d01516\"" Feb 13 20:12:46.547527 containerd[2000]: time="2025-02-13T20:12:46.547425737Z" level=info msg="StartContainer for \"931eb66456d4ff50816e49588ec8271a3ec84e76644c309713b3ceeef7d01516\" returns successfully" Feb 13 20:12:46.849715 kubelet[2500]: I0213 20:12:46.849637 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-bdfm8" podStartSLOduration=9.482103408 
podStartE2EDuration="14.849619988s" podCreationTimestamp="2025-02-13 20:12:32 +0000 UTC" firstStartedPulling="2025-02-13 20:12:41.055012143 +0000 UTC m=+28.198339534" lastFinishedPulling="2025-02-13 20:12:46.422528729 +0000 UTC m=+33.565856114" observedRunningTime="2025-02-13 20:12:46.848928894 +0000 UTC m=+33.992256300" watchObservedRunningTime="2025-02-13 20:12:46.849619988 +0000 UTC m=+33.992947394" Feb 13 20:12:47.449679 kubelet[2500]: E0213 20:12:47.449623 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:48.450313 kubelet[2500]: E0213 20:12:48.450255 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:49.451638 kubelet[2500]: E0213 20:12:49.451573 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:50.452180 kubelet[2500]: E0213 20:12:50.452119 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:51.452457 kubelet[2500]: E0213 20:12:51.452292 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:52.453538 kubelet[2500]: E0213 20:12:52.453481 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:52.740148 kubelet[2500]: I0213 20:12:52.739239 2500 topology_manager.go:215] "Topology Admit Handler" podUID="b4b70e89-e556-466a-aa08-5da80dc58f48" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 20:12:52.797157 kubelet[2500]: I0213 20:12:52.797114 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pjwj\" (UniqueName: \"kubernetes.io/projected/b4b70e89-e556-466a-aa08-5da80dc58f48-kube-api-access-6pjwj\") pod \"nfs-server-provisioner-0\" (UID: \"b4b70e89-e556-466a-aa08-5da80dc58f48\") " pod="default/nfs-server-provisioner-0" Feb 13 20:12:52.797423 kubelet[2500]: I0213 20:12:52.797398 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b4b70e89-e556-466a-aa08-5da80dc58f48-data\") pod \"nfs-server-provisioner-0\" (UID: \"b4b70e89-e556-466a-aa08-5da80dc58f48\") " pod="default/nfs-server-provisioner-0" Feb 13 20:12:53.059249 containerd[2000]: time="2025-02-13T20:12:53.059134923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b4b70e89-e556-466a-aa08-5da80dc58f48,Namespace:default,Attempt:0,}" Feb 13 20:12:53.142236 systemd-networkd[1566]: lxcfa86388f9030: Link UP Feb 13 20:12:53.159333 kernel: eth0: renamed from tmpb9322 Feb 13 20:12:53.161857 (udev-worker)[3960]: Network interface NamePolicy= disabled on kernel command line. Feb 13 20:12:53.163525 systemd-networkd[1566]: lxcfa86388f9030: Gained carrier Feb 13 20:12:53.418158 kubelet[2500]: E0213 20:12:53.418109 2500 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:53.454532 kubelet[2500]: E0213 20:12:53.454456 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:53.541396 containerd[2000]: time="2025-02-13T20:12:53.541260040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:12:53.543195 containerd[2000]: time="2025-02-13T20:12:53.541334412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:12:53.543195 containerd[2000]: time="2025-02-13T20:12:53.542852369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:12:53.543536 containerd[2000]: time="2025-02-13T20:12:53.543256016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:12:53.630885 containerd[2000]: time="2025-02-13T20:12:53.630846149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b4b70e89-e556-466a-aa08-5da80dc58f48,Namespace:default,Attempt:0,} returns sandbox id \"b932203fdae08811f692b6dd1399cca0efff6ac1415892bc0bfe454964ed060c\"" Feb 13 20:12:53.633242 containerd[2000]: time="2025-02-13T20:12:53.633203029Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 20:12:53.916982 systemd[1]: run-containerd-runc-k8s.io-b932203fdae08811f692b6dd1399cca0efff6ac1415892bc0bfe454964ed060c-runc.hpWpJe.mount: Deactivated successfully. Feb 13 20:12:54.206900 systemd-networkd[1566]: lxcfa86388f9030: Gained IPv6LL Feb 13 20:12:54.456155 kubelet[2500]: E0213 20:12:54.456069 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:55.456374 kubelet[2500]: E0213 20:12:55.456330 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:56.457221 kubelet[2500]: E0213 20:12:56.457182 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:56.754996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount319796045.mount: Deactivated successfully. 
Feb 13 20:12:57.158001 ntpd[1960]: Listen normally on 12 lxcfa86388f9030 [fe80::e49d:aaff:fe32:501%11]:123 Feb 13 20:12:57.160937 ntpd[1960]: 13 Feb 20:12:57 ntpd[1960]: Listen normally on 12 lxcfa86388f9030 [fe80::e49d:aaff:fe32:501%11]:123 Feb 13 20:12:57.462228 kubelet[2500]: E0213 20:12:57.462077 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:58.464317 kubelet[2500]: E0213 20:12:58.464267 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:59.464900 kubelet[2500]: E0213 20:12:59.464856 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:12:59.484632 containerd[2000]: time="2025-02-13T20:12:59.484572471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:12:59.486217 containerd[2000]: time="2025-02-13T20:12:59.486082432Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Feb 13 20:12:59.488088 containerd[2000]: time="2025-02-13T20:12:59.487198655Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:12:59.492500 containerd[2000]: time="2025-02-13T20:12:59.492431429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:12:59.493577 containerd[2000]: time="2025-02-13T20:12:59.493534703Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.860292244s" Feb 13 20:12:59.493950 containerd[2000]: time="2025-02-13T20:12:59.493719027Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 20:12:59.496520 containerd[2000]: time="2025-02-13T20:12:59.496483555Z" level=info msg="CreateContainer within sandbox \"b932203fdae08811f692b6dd1399cca0efff6ac1415892bc0bfe454964ed060c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 20:12:59.516230 containerd[2000]: time="2025-02-13T20:12:59.516178319Z" level=info msg="CreateContainer within sandbox \"b932203fdae08811f692b6dd1399cca0efff6ac1415892bc0bfe454964ed060c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"37dc94c92280059b433b2586516a0fbdcd845bb2818a6d4628f04d69e056acb0\"" Feb 13 20:12:59.517324 containerd[2000]: time="2025-02-13T20:12:59.517198963Z" level=info msg="StartContainer for \"37dc94c92280059b433b2586516a0fbdcd845bb2818a6d4628f04d69e056acb0\"" Feb 13 20:12:59.603993 containerd[2000]: time="2025-02-13T20:12:59.603944886Z" level=info msg="StartContainer for \"37dc94c92280059b433b2586516a0fbdcd845bb2818a6d4628f04d69e056acb0\" returns successfully" Feb 13 20:13:00.466049 
kubelet[2500]: E0213 20:13:00.466005 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:01.467098 kubelet[2500]: E0213 20:13:01.467031 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:02.468325 kubelet[2500]: E0213 20:13:02.468013 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:03.469664 kubelet[2500]: E0213 20:13:03.469488 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:04.470567 kubelet[2500]: E0213 20:13:04.470516 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:05.471295 kubelet[2500]: E0213 20:13:05.471065 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:06.471750 kubelet[2500]: E0213 20:13:06.471694 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:07.472120 kubelet[2500]: E0213 20:13:07.472042 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:08.473262 kubelet[2500]: E0213 20:13:08.473063 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:09.005940 kubelet[2500]: I0213 20:13:09.005872 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.143541989 podStartE2EDuration="17.005852763s" podCreationTimestamp="2025-02-13 20:12:52 +0000 UTC" firstStartedPulling="2025-02-13 20:12:53.632503999 +0000 UTC m=+40.775831384" lastFinishedPulling="2025-02-13 20:12:59.494814759 +0000 UTC m=+46.638142158" observedRunningTime="2025-02-13 20:12:59.933262381 +0000 UTC m=+47.076589788" watchObservedRunningTime="2025-02-13 20:13:09.005852763 +0000 UTC m=+56.149180169" Feb 13 20:13:09.006265 kubelet[2500]: I0213 20:13:09.006021 2500 topology_manager.go:215] "Topology Admit Handler" podUID="f6a0a77c-4da6-4cc7-b4e9-6ad568a35f40" podNamespace="default" podName="test-pod-1" Feb 13 20:13:09.032700 kubelet[2500]: I0213 20:13:09.032634 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a59f9c9b-54d5-49b2-924b-48c326ef2e81\" (UniqueName: \"kubernetes.io/nfs/f6a0a77c-4da6-4cc7-b4e9-6ad568a35f40-pvc-a59f9c9b-54d5-49b2-924b-48c326ef2e81\") pod \"test-pod-1\" (UID: \"f6a0a77c-4da6-4cc7-b4e9-6ad568a35f40\") " pod="default/test-pod-1" Feb 13 20:13:09.032700 kubelet[2500]: I0213 20:13:09.032696 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmp4x\" (UniqueName: \"kubernetes.io/projected/f6a0a77c-4da6-4cc7-b4e9-6ad568a35f40-kube-api-access-cmp4x\") pod \"test-pod-1\" (UID: \"f6a0a77c-4da6-4cc7-b4e9-6ad568a35f40\") " pod="default/test-pod-1" Feb 13 20:13:09.192097 kernel: FS-Cache: Loaded Feb 13 20:13:09.280407 kernel: RPC: Registered named UNIX socket transport module. Feb 13 20:13:09.280538 kernel: RPC: Registered udp transport module. Feb 13 20:13:09.280580 kernel: RPC: Registered tcp transport module. 
Feb 13 20:13:09.281566 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 20:13:09.281662 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 20:13:09.474075 kubelet[2500]: E0213 20:13:09.473944 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:09.790226 kernel: NFS: Registering the id_resolver key type Feb 13 20:13:09.790615 kernel: Key type id_resolver registered Feb 13 20:13:09.790669 kernel: Key type id_legacy registered Feb 13 20:13:09.850601 nfsidmap[4144]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 13 20:13:09.856034 nfsidmap[4145]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 13 20:13:09.910493 containerd[2000]: time="2025-02-13T20:13:09.910450844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f6a0a77c-4da6-4cc7-b4e9-6ad568a35f40,Namespace:default,Attempt:0,}" Feb 13 20:13:09.953575 (udev-worker)[4133]: Network interface NamePolicy= disabled on kernel command line. Feb 13 20:13:09.953912 systemd-networkd[1566]: lxc5384bfe5b80c: Link UP Feb 13 20:13:09.962070 kernel: eth0: renamed from tmpd1f01 Feb 13 20:13:09.964652 (udev-worker)[4137]: Network interface NamePolicy= disabled on kernel command line. Feb 13 20:13:09.970151 systemd-networkd[1566]: lxc5384bfe5b80c: Gained carrier Feb 13 20:13:10.278207 containerd[2000]: time="2025-02-13T20:13:10.277813939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:13:10.278207 containerd[2000]: time="2025-02-13T20:13:10.277888904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:13:10.278467 containerd[2000]: time="2025-02-13T20:13:10.278009212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:13:10.279278 containerd[2000]: time="2025-02-13T20:13:10.279205966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:13:10.395160 containerd[2000]: time="2025-02-13T20:13:10.395105259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f6a0a77c-4da6-4cc7-b4e9-6ad568a35f40,Namespace:default,Attempt:0,} returns sandbox id \"d1f019e06334fa57ca286c1770a67d416fe0df75185da6dc1de9a1107b8e834c\"" Feb 13 20:13:10.397178 containerd[2000]: time="2025-02-13T20:13:10.397102197Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 20:13:10.474810 kubelet[2500]: E0213 20:13:10.474727 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:10.736005 containerd[2000]: time="2025-02-13T20:13:10.735955354Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:13:10.737118 containerd[2000]: time="2025-02-13T20:13:10.737044513Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 20:13:10.740387 containerd[2000]: time="2025-02-13T20:13:10.740337644Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 343.193766ms" Feb 13 20:13:10.740387 containerd[2000]: time="2025-02-13T20:13:10.740379058Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 20:13:10.743643 containerd[2000]: time="2025-02-13T20:13:10.743606328Z" level=info msg="CreateContainer within sandbox \"d1f019e06334fa57ca286c1770a67d416fe0df75185da6dc1de9a1107b8e834c\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 20:13:10.765089 containerd[2000]: time="2025-02-13T20:13:10.765007213Z" level=info msg="CreateContainer within sandbox \"d1f019e06334fa57ca286c1770a67d416fe0df75185da6dc1de9a1107b8e834c\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"4ca3da0f2e3e060f8ec82f868258801c9b78ae7dd962ad3f2cd7aee681ffddbd\"" Feb 13 20:13:10.767133 containerd[2000]: time="2025-02-13T20:13:10.767089876Z" level=info msg="StartContainer for \"4ca3da0f2e3e060f8ec82f868258801c9b78ae7dd962ad3f2cd7aee681ffddbd\"" Feb 13 20:13:10.863712 containerd[2000]: time="2025-02-13T20:13:10.863667389Z" level=info msg="StartContainer for \"4ca3da0f2e3e060f8ec82f868258801c9b78ae7dd962ad3f2cd7aee681ffddbd\" returns successfully" Feb 13 20:13:10.959504 kubelet[2500]: I0213 20:13:10.959418 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.614783641 podStartE2EDuration="17.95939997s" podCreationTimestamp="2025-02-13 20:12:53 +0000 UTC" firstStartedPulling="2025-02-13 20:13:10.396741835 +0000 UTC m=+57.540069221" lastFinishedPulling="2025-02-13 20:13:10.741358164 +0000 UTC m=+57.884685550" observedRunningTime="2025-02-13 20:13:10.959178853 +0000 UTC m=+58.102506260" watchObservedRunningTime="2025-02-13 20:13:10.95939997 +0000 UTC m=+58.102727381" Feb 13 20:13:11.420435 systemd-networkd[1566]: lxc5384bfe5b80c: Gained IPv6LL Feb 13 20:13:11.475297 kubelet[2500]: E0213 20:13:11.475246 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 20:13:12.475631 kubelet[2500]: E0213 20:13:12.475580 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:13.417536 kubelet[2500]: E0213 20:13:13.417479 2500 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:13.477067 kubelet[2500]: E0213 20:13:13.476018 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:14.156180 ntpd[1960]: Listen normally on 13 lxc5384bfe5b80c [fe80::ecb6:22ff:fe50:df89%13]:123 Feb 13 20:13:14.156635 ntpd[1960]: 13 Feb 20:13:14 ntpd[1960]: Listen normally on 13 lxc5384bfe5b80c [fe80::ecb6:22ff:fe50:df89%13]:123 Feb 13 20:13:14.476652 kubelet[2500]: E0213 20:13:14.476498 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:15.476990 kubelet[2500]: E0213 20:13:15.476946 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:16.477295 kubelet[2500]: E0213 20:13:16.477236 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:17.472564 systemd[1]: run-containerd-runc-k8s.io-43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8-runc.sNQVlX.mount: Deactivated successfully. Feb 13 20:13:17.479588 kubelet[2500]: E0213 20:13:17.479532 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:17.770543 containerd[2000]: time="2025-02-13T20:13:17.770411622Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:13:17.959172 containerd[2000]: time="2025-02-13T20:13:17.959125702Z" level=info msg="StopContainer for \"43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8\" with timeout 2 (s)" Feb 13 20:13:17.959523 containerd[2000]: time="2025-02-13T20:13:17.959491589Z" level=info msg="Stop container \"43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8\" with signal terminated" Feb 13 20:13:17.968426 systemd-networkd[1566]: lxc_health: Link DOWN Feb 13 20:13:17.968442 systemd-networkd[1566]: lxc_health: Lost carrier Feb 13 20:13:18.091298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8-rootfs.mount: Deactivated successfully. 
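
The nfsidmap messages at 20:13:09 above are NFSv4 ID mapping at work: the server presents owners as "user@domain" strings, and the client only translates them to local accounts when the domain part matches its local idmapping domain (here the EC2 search domain us-west-2.compute.internal); because the provisioner's cluster-internal domain differs, the lookup fails and the owner is typically squashed to the configured nobody user. A toy version of that comparison, for illustration only — the real resolution is done by libnfsidmap together with /etc/idmapd.conf, NSS and optional plugins, not by this function:

```python
def maps_into_local_domain(principal: str, local_domain: str) -> bool:
    """Illustrative check mirroring the nfsidmap messages above: an NFSv4
    'user@domain' owner only maps to a local account when its domain part
    matches the client's idmapping domain."""
    _user, _, domain = principal.partition("@")
    return domain.lower() == local_domain.lower()

print(maps_into_local_domain(
    "root@nfs-server-provisioner.default.svc.cluster.local",
    "us-west-2.compute.internal"))    # False -> owner is squashed to nobody
```
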
Feb 13 20:13:18.157246 containerd[2000]: time="2025-02-13T20:13:18.117713215Z" level=info msg="shim disconnected" id=43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8 namespace=k8s.io Feb 13 20:13:18.157774 containerd[2000]: time="2025-02-13T20:13:18.157250553Z" level=warning msg="cleaning up after shim disconnected" id=43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8 namespace=k8s.io Feb 13 20:13:18.157774 containerd[2000]: time="2025-02-13T20:13:18.157269465Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:13:18.192540 containerd[2000]: time="2025-02-13T20:13:18.192491734Z" level=info msg="StopContainer for \"43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8\" returns successfully" Feb 13 20:13:18.202860 containerd[2000]: time="2025-02-13T20:13:18.202819888Z" level=info msg="StopPodSandbox for \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\"" Feb 13 20:13:18.203012 containerd[2000]: time="2025-02-13T20:13:18.202867488Z" level=info msg="Container to stop \"0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:13:18.203012 containerd[2000]: time="2025-02-13T20:13:18.202885368Z" level=info msg="Container to stop \"43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:13:18.203012 containerd[2000]: time="2025-02-13T20:13:18.202900738Z" level=info msg="Container to stop \"f3d125c5c91cda9d100c9db5075fedae01e2dddb341b33c542b5a45ce9d7e03d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:13:18.203012 containerd[2000]: time="2025-02-13T20:13:18.202913821Z" level=info msg="Container to stop \"61693737953f35ea31ca59706cc6f3e8693b7b4bc0c8097a0055c1266c01d549\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:13:18.203012 containerd[2000]: time="2025-02-13T20:13:18.202926598Z" level=info msg="Container to stop \"b367a41ffabcb802e9698e533e25268cb97438c7123ade9cad2f3f32bc5a29e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:13:18.227500 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b-shm.mount: Deactivated successfully. Feb 13 20:13:18.266995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b-rootfs.mount: Deactivated successfully. 
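
The StopPodSandbox messages above enumerate every container that belonged to sandbox 292e9d75... (all already in CONTAINER_EXITED); these five IDs are the same ones the kubelet removes one by one at 20:13:19 below, and the ContainerStatus "not found" errors that follow are the expected outcome of re-querying containers that have just been deleted. A small sketch, under the assumption that journal text like the above is available as a plain string, that collects those IDs; the regex simply targets the 'Container to stop "..."' fragment as it appears in containerd's msg field:

```python
import re

# Matches: Container to stop \"<64-hex-char id>\"; the quotes may be
# backslash-escaped inside containerd's msg="..." field, as in the journal above.
CONTAINER_TO_STOP = re.compile(r'Container to stop \\?"([0-9a-f]{64})\\?"')

def sandbox_members(journal_text: str) -> list[str]:
    """Container IDs named in 'Container to stop' messages, i.e. the members
    of the pod sandbox being torn down."""
    return CONTAINER_TO_STOP.findall(journal_text)
```
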
Feb 13 20:13:18.279636 containerd[2000]: time="2025-02-13T20:13:18.279563490Z" level=info msg="shim disconnected" id=292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b namespace=k8s.io Feb 13 20:13:18.279636 containerd[2000]: time="2025-02-13T20:13:18.279629662Z" level=warning msg="cleaning up after shim disconnected" id=292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b namespace=k8s.io Feb 13 20:13:18.279636 containerd[2000]: time="2025-02-13T20:13:18.279640613Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:13:18.304659 containerd[2000]: time="2025-02-13T20:13:18.304590959Z" level=info msg="TearDown network for sandbox \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\" successfully" Feb 13 20:13:18.304659 containerd[2000]: time="2025-02-13T20:13:18.304646499Z" level=info msg="StopPodSandbox for \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\" returns successfully" Feb 13 20:13:18.480790 kubelet[2500]: E0213 20:13:18.480661 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:18.485421 kubelet[2500]: I0213 20:13:18.484895 2500 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-bpf-maps\") pod \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " Feb 13 20:13:18.485421 kubelet[2500]: I0213 20:13:18.484969 2500 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2bcb\" (UniqueName: \"kubernetes.io/projected/719c03e5-a3e9-48d1-819e-8eff8acb5c54-kube-api-access-l2bcb\") pod \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " Feb 13 20:13:18.485421 kubelet[2500]: I0213 20:13:18.484998 2500 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-cilium-cgroup\") pod \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " Feb 13 20:13:18.485421 kubelet[2500]: I0213 20:13:18.485009 2500 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "719c03e5-a3e9-48d1-819e-8eff8acb5c54" (UID: "719c03e5-a3e9-48d1-819e-8eff8acb5c54"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:13:18.485421 kubelet[2500]: I0213 20:13:18.485024 2500 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/719c03e5-a3e9-48d1-819e-8eff8acb5c54-hubble-tls\") pod \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " Feb 13 20:13:18.485421 kubelet[2500]: I0213 20:13:18.485082 2500 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-cilium-run\") pod \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " Feb 13 20:13:18.485796 kubelet[2500]: I0213 20:13:18.485103 2500 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-xtables-lock\") pod \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " Feb 13 20:13:18.485796 kubelet[2500]: I0213 20:13:18.485130 2500 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/719c03e5-a3e9-48d1-819e-8eff8acb5c54-clustermesh-secrets\") pod \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " Feb 13 20:13:18.485796 kubelet[2500]: I0213 20:13:18.485153 2500 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-etc-cni-netd\") pod \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " Feb 13 20:13:18.485796 kubelet[2500]: I0213 20:13:18.485173 2500 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-lib-modules\") pod \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " Feb 13 20:13:18.485796 kubelet[2500]: I0213 20:13:18.485198 2500 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/719c03e5-a3e9-48d1-819e-8eff8acb5c54-cilium-config-path\") pod \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " Feb 13 20:13:18.485796 kubelet[2500]: I0213 20:13:18.485222 2500 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-host-proc-sys-net\") pod \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " Feb 13 20:13:18.486082 kubelet[2500]: I0213 20:13:18.485247 2500 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-host-proc-sys-kernel\") pod \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " Feb 13 20:13:18.486082 kubelet[2500]: I0213 20:13:18.485270 2500 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-hostproc\") pod \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") 
" Feb 13 20:13:18.486082 kubelet[2500]: I0213 20:13:18.485289 2500 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-cni-path\") pod \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\" (UID: \"719c03e5-a3e9-48d1-819e-8eff8acb5c54\") " Feb 13 20:13:18.486082 kubelet[2500]: I0213 20:13:18.485328 2500 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-bpf-maps\") on node \"172.31.17.230\" DevicePath \"\"" Feb 13 20:13:18.486082 kubelet[2500]: I0213 20:13:18.485364 2500 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-cni-path" (OuterVolumeSpecName: "cni-path") pod "719c03e5-a3e9-48d1-819e-8eff8acb5c54" (UID: "719c03e5-a3e9-48d1-819e-8eff8acb5c54"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:13:18.486082 kubelet[2500]: I0213 20:13:18.485390 2500 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "719c03e5-a3e9-48d1-819e-8eff8acb5c54" (UID: "719c03e5-a3e9-48d1-819e-8eff8acb5c54"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:13:18.498157 kubelet[2500]: I0213 20:13:18.491214 2500 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "719c03e5-a3e9-48d1-819e-8eff8acb5c54" (UID: "719c03e5-a3e9-48d1-819e-8eff8acb5c54"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:13:18.498157 kubelet[2500]: I0213 20:13:18.491263 2500 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "719c03e5-a3e9-48d1-819e-8eff8acb5c54" (UID: "719c03e5-a3e9-48d1-819e-8eff8acb5c54"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:13:18.498157 kubelet[2500]: I0213 20:13:18.491400 2500 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/719c03e5-a3e9-48d1-819e-8eff8acb5c54-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "719c03e5-a3e9-48d1-819e-8eff8acb5c54" (UID: "719c03e5-a3e9-48d1-819e-8eff8acb5c54"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:13:18.500665 systemd[1]: var-lib-kubelet-pods-719c03e5\x2da3e9\x2d48d1\x2d819e\x2d8eff8acb5c54-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl2bcb.mount: Deactivated successfully. Feb 13 20:13:18.500887 systemd[1]: var-lib-kubelet-pods-719c03e5\x2da3e9\x2d48d1\x2d819e\x2d8eff8acb5c54-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 20:13:18.503008 kubelet[2500]: I0213 20:13:18.502970 2500 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "719c03e5-a3e9-48d1-819e-8eff8acb5c54" (UID: "719c03e5-a3e9-48d1-819e-8eff8acb5c54"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:13:18.503210 kubelet[2500]: I0213 20:13:18.503192 2500 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "719c03e5-a3e9-48d1-819e-8eff8acb5c54" (UID: "719c03e5-a3e9-48d1-819e-8eff8acb5c54"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:13:18.503393 kubelet[2500]: I0213 20:13:18.503373 2500 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/719c03e5-a3e9-48d1-819e-8eff8acb5c54-kube-api-access-l2bcb" (OuterVolumeSpecName: "kube-api-access-l2bcb") pod "719c03e5-a3e9-48d1-819e-8eff8acb5c54" (UID: "719c03e5-a3e9-48d1-819e-8eff8acb5c54"). InnerVolumeSpecName "kube-api-access-l2bcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:13:18.511416 kubelet[2500]: I0213 20:13:18.509894 2500 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719c03e5-a3e9-48d1-819e-8eff8acb5c54-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "719c03e5-a3e9-48d1-819e-8eff8acb5c54" (UID: "719c03e5-a3e9-48d1-819e-8eff8acb5c54"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 20:13:18.511644 kubelet[2500]: I0213 20:13:18.511490 2500 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "719c03e5-a3e9-48d1-819e-8eff8acb5c54" (UID: "719c03e5-a3e9-48d1-819e-8eff8acb5c54"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:13:18.511644 kubelet[2500]: I0213 20:13:18.511542 2500 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "719c03e5-a3e9-48d1-819e-8eff8acb5c54" (UID: "719c03e5-a3e9-48d1-819e-8eff8acb5c54"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:13:18.512282 kubelet[2500]: I0213 20:13:18.511563 2500 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-hostproc" (OuterVolumeSpecName: "hostproc") pod "719c03e5-a3e9-48d1-819e-8eff8acb5c54" (UID: "719c03e5-a3e9-48d1-819e-8eff8acb5c54"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:13:18.514608 kubelet[2500]: I0213 20:13:18.514539 2500 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/719c03e5-a3e9-48d1-819e-8eff8acb5c54-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "719c03e5-a3e9-48d1-819e-8eff8acb5c54" (UID: "719c03e5-a3e9-48d1-819e-8eff8acb5c54"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 20:13:18.523808 systemd[1]: var-lib-kubelet-pods-719c03e5\x2da3e9\x2d48d1\x2d819e\x2d8eff8acb5c54-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
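
The systemd mount units deactivated above encode the mounted paths using systemd's unit-name escaping: "/" becomes "-", while a literal "-" or "~" inside a path component is written as \x2d or \x7e. A small decoder (roughly what `systemd-escape --unescape --path` does) to make those names readable; it is a simplified sketch of the escaping rules, not a full implementation:

```python
import re

def unescape_mount_unit(unit: str) -> str:
    """Turn a systemd .mount unit name back into the path it encodes
    (simplified: handles '-' separators and \\xNN byte escapes only)."""
    name = unit.removesuffix(".mount")
    components = [
        re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), part)
        for part in name.split("-")
    ]
    return "/" + "/".join(components)

# Raw string keeps the literal backslashes exactly as they appear in the journal.
print(unescape_mount_unit(
    r"var-lib-kubelet-pods-719c03e5\x2da3e9\x2d48d1\x2d819e\x2d8eff8acb5c54-"
    r"volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl2bcb.mount"))
# -> /var/lib/kubelet/pods/719c03e5-a3e9-48d1-819e-8eff8acb5c54/volumes/kubernetes.io~projected/kube-api-access-l2bcb
```
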
Feb 13 20:13:18.543658 kubelet[2500]: E0213 20:13:18.543617 2500 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:13:18.586336 kubelet[2500]: I0213 20:13:18.586283 2500 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/719c03e5-a3e9-48d1-819e-8eff8acb5c54-clustermesh-secrets\") on node \"172.31.17.230\" DevicePath \"\"" Feb 13 20:13:18.586336 kubelet[2500]: I0213 20:13:18.586322 2500 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-etc-cni-netd\") on node \"172.31.17.230\" DevicePath \"\"" Feb 13 20:13:18.586336 kubelet[2500]: I0213 20:13:18.586334 2500 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-lib-modules\") on node \"172.31.17.230\" DevicePath \"\"" Feb 13 20:13:18.586336 kubelet[2500]: I0213 20:13:18.586345 2500 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/719c03e5-a3e9-48d1-819e-8eff8acb5c54-cilium-config-path\") on node \"172.31.17.230\" DevicePath \"\"" Feb 13 20:13:18.586596 kubelet[2500]: I0213 20:13:18.586357 2500 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-host-proc-sys-net\") on node \"172.31.17.230\" DevicePath \"\"" Feb 13 20:13:18.586596 kubelet[2500]: I0213 20:13:18.586368 2500 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-host-proc-sys-kernel\") on node \"172.31.17.230\" DevicePath \"\"" Feb 13 20:13:18.586596 kubelet[2500]: I0213 20:13:18.586379 2500 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-hostproc\") on node \"172.31.17.230\" DevicePath \"\"" Feb 13 20:13:18.586596 kubelet[2500]: I0213 20:13:18.586389 2500 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-cni-path\") on node \"172.31.17.230\" DevicePath \"\"" Feb 13 20:13:18.586596 kubelet[2500]: I0213 20:13:18.586398 2500 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-cilium-cgroup\") on node \"172.31.17.230\" DevicePath \"\"" Feb 13 20:13:18.586596 kubelet[2500]: I0213 20:13:18.586408 2500 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/719c03e5-a3e9-48d1-819e-8eff8acb5c54-hubble-tls\") on node \"172.31.17.230\" DevicePath \"\"" Feb 13 20:13:18.586596 kubelet[2500]: I0213 20:13:18.586417 2500 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-cilium-run\") on node \"172.31.17.230\" DevicePath \"\"" Feb 13 20:13:18.586596 kubelet[2500]: I0213 20:13:18.586427 2500 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/719c03e5-a3e9-48d1-819e-8eff8acb5c54-xtables-lock\") on node \"172.31.17.230\" DevicePath \"\"" Feb 13 20:13:18.586792 
kubelet[2500]: I0213 20:13:18.586438 2500 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-l2bcb\" (UniqueName: \"kubernetes.io/projected/719c03e5-a3e9-48d1-819e-8eff8acb5c54-kube-api-access-l2bcb\") on node \"172.31.17.230\" DevicePath \"\"" Feb 13 20:13:18.964542 kubelet[2500]: I0213 20:13:18.964511 2500 scope.go:117] "RemoveContainer" containerID="43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8" Feb 13 20:13:18.999219 containerd[2000]: time="2025-02-13T20:13:18.978207539Z" level=info msg="RemoveContainer for \"43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8\"" Feb 13 20:13:19.017826 containerd[2000]: time="2025-02-13T20:13:19.017778754Z" level=info msg="RemoveContainer for \"43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8\" returns successfully" Feb 13 20:13:19.018320 kubelet[2500]: I0213 20:13:19.018288 2500 scope.go:117] "RemoveContainer" containerID="b367a41ffabcb802e9698e533e25268cb97438c7123ade9cad2f3f32bc5a29e3" Feb 13 20:13:19.019819 containerd[2000]: time="2025-02-13T20:13:19.019778147Z" level=info msg="RemoveContainer for \"b367a41ffabcb802e9698e533e25268cb97438c7123ade9cad2f3f32bc5a29e3\"" Feb 13 20:13:19.023486 containerd[2000]: time="2025-02-13T20:13:19.023444701Z" level=info msg="RemoveContainer for \"b367a41ffabcb802e9698e533e25268cb97438c7123ade9cad2f3f32bc5a29e3\" returns successfully" Feb 13 20:13:19.023738 kubelet[2500]: I0213 20:13:19.023716 2500 scope.go:117] "RemoveContainer" containerID="0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7" Feb 13 20:13:19.025141 containerd[2000]: time="2025-02-13T20:13:19.025107875Z" level=info msg="RemoveContainer for \"0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7\"" Feb 13 20:13:19.037745 containerd[2000]: time="2025-02-13T20:13:19.037694265Z" level=info msg="RemoveContainer for \"0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7\" returns successfully" Feb 13 20:13:19.037998 kubelet[2500]: I0213 20:13:19.037967 2500 scope.go:117] "RemoveContainer" containerID="61693737953f35ea31ca59706cc6f3e8693b7b4bc0c8097a0055c1266c01d549" Feb 13 20:13:19.039194 containerd[2000]: time="2025-02-13T20:13:19.039151692Z" level=info msg="RemoveContainer for \"61693737953f35ea31ca59706cc6f3e8693b7b4bc0c8097a0055c1266c01d549\"" Feb 13 20:13:19.042607 containerd[2000]: time="2025-02-13T20:13:19.042514989Z" level=info msg="RemoveContainer for \"61693737953f35ea31ca59706cc6f3e8693b7b4bc0c8097a0055c1266c01d549\" returns successfully" Feb 13 20:13:19.043033 kubelet[2500]: I0213 20:13:19.042987 2500 scope.go:117] "RemoveContainer" containerID="f3d125c5c91cda9d100c9db5075fedae01e2dddb341b33c542b5a45ce9d7e03d" Feb 13 20:13:19.045529 containerd[2000]: time="2025-02-13T20:13:19.045491746Z" level=info msg="RemoveContainer for \"f3d125c5c91cda9d100c9db5075fedae01e2dddb341b33c542b5a45ce9d7e03d\"" Feb 13 20:13:19.049102 containerd[2000]: time="2025-02-13T20:13:19.049020652Z" level=info msg="RemoveContainer for \"f3d125c5c91cda9d100c9db5075fedae01e2dddb341b33c542b5a45ce9d7e03d\" returns successfully" Feb 13 20:13:19.049516 kubelet[2500]: I0213 20:13:19.049373 2500 scope.go:117] "RemoveContainer" containerID="43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8" Feb 13 20:13:19.057809 containerd[2000]: time="2025-02-13T20:13:19.057736163Z" level=error msg="ContainerStatus for \"43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8\": not found" Feb 13 20:13:19.090539 kubelet[2500]: E0213 20:13:19.090331 2500 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8\": not found" containerID="43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8" Feb 13 20:13:19.090728 kubelet[2500]: I0213 20:13:19.090545 2500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8"} err="failed to get container status \"43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"43c4e431e4165032c0a3f5a28359f1bbf9d856beeca87569af776384f08d07c8\": not found" Feb 13 20:13:19.090728 kubelet[2500]: I0213 20:13:19.090670 2500 scope.go:117] "RemoveContainer" containerID="b367a41ffabcb802e9698e533e25268cb97438c7123ade9cad2f3f32bc5a29e3" Feb 13 20:13:19.092256 containerd[2000]: time="2025-02-13T20:13:19.091923156Z" level=error msg="ContainerStatus for \"b367a41ffabcb802e9698e533e25268cb97438c7123ade9cad2f3f32bc5a29e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b367a41ffabcb802e9698e533e25268cb97438c7123ade9cad2f3f32bc5a29e3\": not found" Feb 13 20:13:19.092606 kubelet[2500]: E0213 20:13:19.092536 2500 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b367a41ffabcb802e9698e533e25268cb97438c7123ade9cad2f3f32bc5a29e3\": not found" containerID="b367a41ffabcb802e9698e533e25268cb97438c7123ade9cad2f3f32bc5a29e3" Feb 13 20:13:19.092709 kubelet[2500]: I0213 20:13:19.092621 2500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b367a41ffabcb802e9698e533e25268cb97438c7123ade9cad2f3f32bc5a29e3"} err="failed to get container status \"b367a41ffabcb802e9698e533e25268cb97438c7123ade9cad2f3f32bc5a29e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"b367a41ffabcb802e9698e533e25268cb97438c7123ade9cad2f3f32bc5a29e3\": not found" Feb 13 20:13:19.092709 kubelet[2500]: I0213 20:13:19.092649 2500 scope.go:117] "RemoveContainer" containerID="0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7" Feb 13 20:13:19.093819 containerd[2000]: time="2025-02-13T20:13:19.093605158Z" level=error msg="ContainerStatus for \"0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7\": not found" Feb 13 20:13:19.094068 kubelet[2500]: E0213 20:13:19.093973 2500 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7\": not found" containerID="0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7" Feb 13 20:13:19.094150 kubelet[2500]: I0213 20:13:19.094079 2500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7"} err="failed to get 
container status \"0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7\": rpc error: code = NotFound desc = an error occurred when try to find container \"0fded88045a90c2d1cc91e1bc05531e6297117486cfcd1ab808309ba0a18bed7\": not found" Feb 13 20:13:19.094150 kubelet[2500]: I0213 20:13:19.094105 2500 scope.go:117] "RemoveContainer" containerID="61693737953f35ea31ca59706cc6f3e8693b7b4bc0c8097a0055c1266c01d549" Feb 13 20:13:19.094840 containerd[2000]: time="2025-02-13T20:13:19.094794273Z" level=error msg="ContainerStatus for \"61693737953f35ea31ca59706cc6f3e8693b7b4bc0c8097a0055c1266c01d549\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61693737953f35ea31ca59706cc6f3e8693b7b4bc0c8097a0055c1266c01d549\": not found" Feb 13 20:13:19.096635 kubelet[2500]: E0213 20:13:19.096595 2500 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"61693737953f35ea31ca59706cc6f3e8693b7b4bc0c8097a0055c1266c01d549\": not found" containerID="61693737953f35ea31ca59706cc6f3e8693b7b4bc0c8097a0055c1266c01d549" Feb 13 20:13:19.096803 kubelet[2500]: I0213 20:13:19.096640 2500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"61693737953f35ea31ca59706cc6f3e8693b7b4bc0c8097a0055c1266c01d549"} err="failed to get container status \"61693737953f35ea31ca59706cc6f3e8693b7b4bc0c8097a0055c1266c01d549\": rpc error: code = NotFound desc = an error occurred when try to find container \"61693737953f35ea31ca59706cc6f3e8693b7b4bc0c8097a0055c1266c01d549\": not found" Feb 13 20:13:19.096803 kubelet[2500]: I0213 20:13:19.096711 2500 scope.go:117] "RemoveContainer" containerID="f3d125c5c91cda9d100c9db5075fedae01e2dddb341b33c542b5a45ce9d7e03d" Feb 13 20:13:19.098172 containerd[2000]: time="2025-02-13T20:13:19.097982073Z" level=error msg="ContainerStatus for \"f3d125c5c91cda9d100c9db5075fedae01e2dddb341b33c542b5a45ce9d7e03d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3d125c5c91cda9d100c9db5075fedae01e2dddb341b33c542b5a45ce9d7e03d\": not found" Feb 13 20:13:19.098457 kubelet[2500]: E0213 20:13:19.098415 2500 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3d125c5c91cda9d100c9db5075fedae01e2dddb341b33c542b5a45ce9d7e03d\": not found" containerID="f3d125c5c91cda9d100c9db5075fedae01e2dddb341b33c542b5a45ce9d7e03d" Feb 13 20:13:19.098567 kubelet[2500]: I0213 20:13:19.098454 2500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3d125c5c91cda9d100c9db5075fedae01e2dddb341b33c542b5a45ce9d7e03d"} err="failed to get container status \"f3d125c5c91cda9d100c9db5075fedae01e2dddb341b33c542b5a45ce9d7e03d\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3d125c5c91cda9d100c9db5075fedae01e2dddb341b33c542b5a45ce9d7e03d\": not found" Feb 13 20:13:19.481727 kubelet[2500]: E0213 20:13:19.481675 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:19.584247 kubelet[2500]: I0213 20:13:19.584204 2500 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="719c03e5-a3e9-48d1-819e-8eff8acb5c54" path="/var/lib/kubelet/pods/719c03e5-a3e9-48d1-819e-8eff8acb5c54/volumes" Feb 13 20:13:20.156412 ntpd[1960]: Deleting interface 
#10 lxc_health, fe80::8c81:d6ff:fef4:f95a%7#123, interface stats: received=0, sent=0, dropped=0, active_time=42 secs Feb 13 20:13:20.156820 ntpd[1960]: 13 Feb 20:13:20 ntpd[1960]: Deleting interface #10 lxc_health, fe80::8c81:d6ff:fef4:f95a%7#123, interface stats: received=0, sent=0, dropped=0, active_time=42 secs Feb 13 20:13:20.482563 kubelet[2500]: E0213 20:13:20.482422 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:20.853295 kubelet[2500]: I0213 20:13:20.853248 2500 topology_manager.go:215] "Topology Admit Handler" podUID="8165f75b-7e15-476d-a584-6d05a63a52a4" podNamespace="kube-system" podName="cilium-operator-599987898-hz8kb" Feb 13 20:13:20.853479 kubelet[2500]: E0213 20:13:20.853320 2500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="719c03e5-a3e9-48d1-819e-8eff8acb5c54" containerName="mount-bpf-fs" Feb 13 20:13:20.853479 kubelet[2500]: E0213 20:13:20.853334 2500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="719c03e5-a3e9-48d1-819e-8eff8acb5c54" containerName="cilium-agent" Feb 13 20:13:20.853479 kubelet[2500]: E0213 20:13:20.853342 2500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="719c03e5-a3e9-48d1-819e-8eff8acb5c54" containerName="clean-cilium-state" Feb 13 20:13:20.853479 kubelet[2500]: E0213 20:13:20.853350 2500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="719c03e5-a3e9-48d1-819e-8eff8acb5c54" containerName="mount-cgroup" Feb 13 20:13:20.853479 kubelet[2500]: E0213 20:13:20.853393 2500 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="719c03e5-a3e9-48d1-819e-8eff8acb5c54" containerName="apply-sysctl-overwrites" Feb 13 20:13:20.853479 kubelet[2500]: I0213 20:13:20.853417 2500 memory_manager.go:354] "RemoveStaleState removing state" podUID="719c03e5-a3e9-48d1-819e-8eff8acb5c54" containerName="cilium-agent" Feb 13 20:13:20.865686 kubelet[2500]: W0213 20:13:20.865648 2500 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.17.230" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.17.230' and this object Feb 13 20:13:20.865686 kubelet[2500]: E0213 20:13:20.865689 2500 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.17.230" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.17.230' and this object Feb 13 20:13:20.908225 kubelet[2500]: I0213 20:13:20.908188 2500 topology_manager.go:215] "Topology Admit Handler" podUID="9601b015-db9a-45ea-b519-c48bdd229ed0" podNamespace="kube-system" podName="cilium-plpph" Feb 13 20:13:21.004341 kubelet[2500]: I0213 20:13:21.004273 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8165f75b-7e15-476d-a584-6d05a63a52a4-cilium-config-path\") pod \"cilium-operator-599987898-hz8kb\" (UID: \"8165f75b-7e15-476d-a584-6d05a63a52a4\") " pod="kube-system/cilium-operator-599987898-hz8kb" Feb 13 20:13:21.004341 kubelet[2500]: I0213 20:13:21.004326 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m9pq\" (UniqueName: 
\"kubernetes.io/projected/8165f75b-7e15-476d-a584-6d05a63a52a4-kube-api-access-2m9pq\") pod \"cilium-operator-599987898-hz8kb\" (UID: \"8165f75b-7e15-476d-a584-6d05a63a52a4\") " pod="kube-system/cilium-operator-599987898-hz8kb" Feb 13 20:13:21.105486 kubelet[2500]: I0213 20:13:21.105351 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9601b015-db9a-45ea-b519-c48bdd229ed0-cilium-cgroup\") pod \"cilium-plpph\" (UID: \"9601b015-db9a-45ea-b519-c48bdd229ed0\") " pod="kube-system/cilium-plpph" Feb 13 20:13:21.105629 kubelet[2500]: I0213 20:13:21.105519 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9601b015-db9a-45ea-b519-c48bdd229ed0-cilium-ipsec-secrets\") pod \"cilium-plpph\" (UID: \"9601b015-db9a-45ea-b519-c48bdd229ed0\") " pod="kube-system/cilium-plpph" Feb 13 20:13:21.105629 kubelet[2500]: I0213 20:13:21.105552 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx4bb\" (UniqueName: \"kubernetes.io/projected/9601b015-db9a-45ea-b519-c48bdd229ed0-kube-api-access-zx4bb\") pod \"cilium-plpph\" (UID: \"9601b015-db9a-45ea-b519-c48bdd229ed0\") " pod="kube-system/cilium-plpph" Feb 13 20:13:21.105629 kubelet[2500]: I0213 20:13:21.105596 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9601b015-db9a-45ea-b519-c48bdd229ed0-cilium-run\") pod \"cilium-plpph\" (UID: \"9601b015-db9a-45ea-b519-c48bdd229ed0\") " pod="kube-system/cilium-plpph" Feb 13 20:13:21.105629 kubelet[2500]: I0213 20:13:21.105619 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9601b015-db9a-45ea-b519-c48bdd229ed0-xtables-lock\") pod \"cilium-plpph\" (UID: \"9601b015-db9a-45ea-b519-c48bdd229ed0\") " pod="kube-system/cilium-plpph" Feb 13 20:13:21.105817 kubelet[2500]: I0213 20:13:21.105642 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9601b015-db9a-45ea-b519-c48bdd229ed0-host-proc-sys-net\") pod \"cilium-plpph\" (UID: \"9601b015-db9a-45ea-b519-c48bdd229ed0\") " pod="kube-system/cilium-plpph" Feb 13 20:13:21.105817 kubelet[2500]: I0213 20:13:21.105667 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9601b015-db9a-45ea-b519-c48bdd229ed0-cni-path\") pod \"cilium-plpph\" (UID: \"9601b015-db9a-45ea-b519-c48bdd229ed0\") " pod="kube-system/cilium-plpph" Feb 13 20:13:21.105817 kubelet[2500]: I0213 20:13:21.105692 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9601b015-db9a-45ea-b519-c48bdd229ed0-lib-modules\") pod \"cilium-plpph\" (UID: \"9601b015-db9a-45ea-b519-c48bdd229ed0\") " pod="kube-system/cilium-plpph" Feb 13 20:13:21.105817 kubelet[2500]: I0213 20:13:21.105717 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9601b015-db9a-45ea-b519-c48bdd229ed0-host-proc-sys-kernel\") pod \"cilium-plpph\" (UID: 
\"9601b015-db9a-45ea-b519-c48bdd229ed0\") " pod="kube-system/cilium-plpph" Feb 13 20:13:21.105817 kubelet[2500]: I0213 20:13:21.105742 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9601b015-db9a-45ea-b519-c48bdd229ed0-hubble-tls\") pod \"cilium-plpph\" (UID: \"9601b015-db9a-45ea-b519-c48bdd229ed0\") " pod="kube-system/cilium-plpph" Feb 13 20:13:21.105817 kubelet[2500]: I0213 20:13:21.105767 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9601b015-db9a-45ea-b519-c48bdd229ed0-bpf-maps\") pod \"cilium-plpph\" (UID: \"9601b015-db9a-45ea-b519-c48bdd229ed0\") " pod="kube-system/cilium-plpph" Feb 13 20:13:21.106041 kubelet[2500]: I0213 20:13:21.105791 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9601b015-db9a-45ea-b519-c48bdd229ed0-hostproc\") pod \"cilium-plpph\" (UID: \"9601b015-db9a-45ea-b519-c48bdd229ed0\") " pod="kube-system/cilium-plpph" Feb 13 20:13:21.106041 kubelet[2500]: I0213 20:13:21.105814 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9601b015-db9a-45ea-b519-c48bdd229ed0-etc-cni-netd\") pod \"cilium-plpph\" (UID: \"9601b015-db9a-45ea-b519-c48bdd229ed0\") " pod="kube-system/cilium-plpph" Feb 13 20:13:21.106041 kubelet[2500]: I0213 20:13:21.105840 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9601b015-db9a-45ea-b519-c48bdd229ed0-clustermesh-secrets\") pod \"cilium-plpph\" (UID: \"9601b015-db9a-45ea-b519-c48bdd229ed0\") " pod="kube-system/cilium-plpph" Feb 13 20:13:21.106041 kubelet[2500]: I0213 20:13:21.105864 2500 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9601b015-db9a-45ea-b519-c48bdd229ed0-cilium-config-path\") pod \"cilium-plpph\" (UID: \"9601b015-db9a-45ea-b519-c48bdd229ed0\") " pod="kube-system/cilium-plpph" Feb 13 20:13:21.483446 kubelet[2500]: E0213 20:13:21.483306 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:22.142813 containerd[2000]: time="2025-02-13T20:13:22.142763809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-plpph,Uid:9601b015-db9a-45ea-b519-c48bdd229ed0,Namespace:kube-system,Attempt:0,}" Feb 13 20:13:22.228085 containerd[2000]: time="2025-02-13T20:13:22.227703767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:13:22.228085 containerd[2000]: time="2025-02-13T20:13:22.228025477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:13:22.228564 containerd[2000]: time="2025-02-13T20:13:22.228066885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:13:22.228564 containerd[2000]: time="2025-02-13T20:13:22.228249703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:13:22.288700 containerd[2000]: time="2025-02-13T20:13:22.288655119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-plpph,Uid:9601b015-db9a-45ea-b519-c48bdd229ed0,Namespace:kube-system,Attempt:0,} returns sandbox id \"196315eaa5b07040223788f5c494c3b84bfda46fb0284a48f2f32b663da9d727\"" Feb 13 20:13:22.294006 containerd[2000]: time="2025-02-13T20:13:22.293963531Z" level=info msg="CreateContainer within sandbox \"196315eaa5b07040223788f5c494c3b84bfda46fb0284a48f2f32b663da9d727\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 20:13:22.348818 containerd[2000]: time="2025-02-13T20:13:22.348741539Z" level=info msg="CreateContainer within sandbox \"196315eaa5b07040223788f5c494c3b84bfda46fb0284a48f2f32b663da9d727\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f1b074e792052ad76c90d587c23209a2558276043f7049fd7993bdda5c9a6d5c\"" Feb 13 20:13:22.350061 containerd[2000]: time="2025-02-13T20:13:22.349985918Z" level=info msg="StartContainer for \"f1b074e792052ad76c90d587c23209a2558276043f7049fd7993bdda5c9a6d5c\"" Feb 13 20:13:22.358683 containerd[2000]: time="2025-02-13T20:13:22.358360386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-hz8kb,Uid:8165f75b-7e15-476d-a584-6d05a63a52a4,Namespace:kube-system,Attempt:0,}" Feb 13 20:13:22.434013 containerd[2000]: time="2025-02-13T20:13:22.433809843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:13:22.435031 containerd[2000]: time="2025-02-13T20:13:22.433867924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:13:22.435031 containerd[2000]: time="2025-02-13T20:13:22.434491960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:13:22.435031 containerd[2000]: time="2025-02-13T20:13:22.434638015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:13:22.468372 containerd[2000]: time="2025-02-13T20:13:22.468325042Z" level=info msg="StartContainer for \"f1b074e792052ad76c90d587c23209a2558276043f7049fd7993bdda5c9a6d5c\" returns successfully" Feb 13 20:13:22.484505 kubelet[2500]: E0213 20:13:22.484467 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:22.528335 containerd[2000]: time="2025-02-13T20:13:22.528282271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-hz8kb,Uid:8165f75b-7e15-476d-a584-6d05a63a52a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb98582e16cc15b0f9242805bf6a17d884092dda6d53b0f89aca60a41c80dde1\"" Feb 13 20:13:22.531021 containerd[2000]: time="2025-02-13T20:13:22.530982907Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 20:13:22.976622 containerd[2000]: time="2025-02-13T20:13:22.976546898Z" level=info msg="shim disconnected" id=f1b074e792052ad76c90d587c23209a2558276043f7049fd7993bdda5c9a6d5c namespace=k8s.io Feb 13 20:13:22.977657 containerd[2000]: time="2025-02-13T20:13:22.976727722Z" level=warning msg="cleaning up after shim disconnected" id=f1b074e792052ad76c90d587c23209a2558276043f7049fd7993bdda5c9a6d5c namespace=k8s.io Feb 13 20:13:22.977657 containerd[2000]: time="2025-02-13T20:13:22.977325531Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:13:23.485720 kubelet[2500]: E0213 20:13:23.485658 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:23.544646 kubelet[2500]: E0213 20:13:23.544604 2500 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:13:23.971394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1076117053.mount: Deactivated successfully. Feb 13 20:13:24.020798 containerd[2000]: time="2025-02-13T20:13:24.020483347Z" level=info msg="CreateContainer within sandbox \"196315eaa5b07040223788f5c494c3b84bfda46fb0284a48f2f32b663da9d727\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 20:13:24.116382 containerd[2000]: time="2025-02-13T20:13:24.116332184Z" level=info msg="CreateContainer within sandbox \"196315eaa5b07040223788f5c494c3b84bfda46fb0284a48f2f32b663da9d727\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d86599d87d1d8ced24703fe670c2dc623d9000922eaed6683c479f9c84805fd5\"" Feb 13 20:13:24.118420 containerd[2000]: time="2025-02-13T20:13:24.117865436Z" level=info msg="StartContainer for \"d86599d87d1d8ced24703fe670c2dc623d9000922eaed6683c479f9c84805fd5\"" Feb 13 20:13:24.216783 systemd[1]: run-containerd-runc-k8s.io-d86599d87d1d8ced24703fe670c2dc623d9000922eaed6683c479f9c84805fd5-runc.XCXZGX.mount: Deactivated successfully. Feb 13 20:13:24.270821 containerd[2000]: time="2025-02-13T20:13:24.270419725Z" level=info msg="StartContainer for \"d86599d87d1d8ced24703fe670c2dc623d9000922eaed6683c479f9c84805fd5\" returns successfully" Feb 13 20:13:24.327859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d86599d87d1d8ced24703fe670c2dc623d9000922eaed6683c479f9c84805fd5-rootfs.mount: Deactivated successfully. 
Feb 13 20:13:24.352950 containerd[2000]: time="2025-02-13T20:13:24.352879579Z" level=info msg="shim disconnected" id=d86599d87d1d8ced24703fe670c2dc623d9000922eaed6683c479f9c84805fd5 namespace=k8s.io Feb 13 20:13:24.352950 containerd[2000]: time="2025-02-13T20:13:24.352947511Z" level=warning msg="cleaning up after shim disconnected" id=d86599d87d1d8ced24703fe670c2dc623d9000922eaed6683c479f9c84805fd5 namespace=k8s.io Feb 13 20:13:24.352950 containerd[2000]: time="2025-02-13T20:13:24.352968829Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:13:24.486164 kubelet[2500]: E0213 20:13:24.486126 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:24.871835 containerd[2000]: time="2025-02-13T20:13:24.871720051Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:13:24.872775 containerd[2000]: time="2025-02-13T20:13:24.872725265Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 20:13:24.874760 containerd[2000]: time="2025-02-13T20:13:24.873668111Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:13:24.875284 containerd[2000]: time="2025-02-13T20:13:24.875247535Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.344221159s" Feb 13 20:13:24.875371 containerd[2000]: time="2025-02-13T20:13:24.875292866Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 20:13:24.883822 containerd[2000]: time="2025-02-13T20:13:24.883716798Z" level=info msg="CreateContainer within sandbox \"bb98582e16cc15b0f9242805bf6a17d884092dda6d53b0f89aca60a41c80dde1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 20:13:24.898370 containerd[2000]: time="2025-02-13T20:13:24.898281687Z" level=info msg="CreateContainer within sandbox \"bb98582e16cc15b0f9242805bf6a17d884092dda6d53b0f89aca60a41c80dde1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4566e63f03c36cb299dc4fe00caa3414d600ea2a58500cf9030c8bcb0f3830b0\"" Feb 13 20:13:24.899633 containerd[2000]: time="2025-02-13T20:13:24.899590759Z" level=info msg="StartContainer for \"4566e63f03c36cb299dc4fe00caa3414d600ea2a58500cf9030c8bcb0f3830b0\"" Feb 13 20:13:24.963110 containerd[2000]: time="2025-02-13T20:13:24.963064618Z" level=info msg="StartContainer for \"4566e63f03c36cb299dc4fe00caa3414d600ea2a58500cf9030c8bcb0f3830b0\" returns successfully" Feb 13 20:13:25.026245 containerd[2000]: time="2025-02-13T20:13:25.026199884Z" level=info msg="CreateContainer within sandbox \"196315eaa5b07040223788f5c494c3b84bfda46fb0284a48f2f32b663da9d727\" for container 
&ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 20:13:25.064087 containerd[2000]: time="2025-02-13T20:13:25.059319656Z" level=info msg="CreateContainer within sandbox \"196315eaa5b07040223788f5c494c3b84bfda46fb0284a48f2f32b663da9d727\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9035c00284c374e90e5c3ea2a7347195c7aac3170474ecfe3c8e74bbdf2ca0b2\"" Feb 13 20:13:25.071095 containerd[2000]: time="2025-02-13T20:13:25.069659023Z" level=info msg="StartContainer for \"9035c00284c374e90e5c3ea2a7347195c7aac3170474ecfe3c8e74bbdf2ca0b2\"" Feb 13 20:13:25.106943 kubelet[2500]: I0213 20:13:25.106882 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-hz8kb" podStartSLOduration=2.760783828 podStartE2EDuration="5.106866128s" podCreationTimestamp="2025-02-13 20:13:20 +0000 UTC" firstStartedPulling="2025-02-13 20:13:22.53032399 +0000 UTC m=+69.673651375" lastFinishedPulling="2025-02-13 20:13:24.876406284 +0000 UTC m=+72.019733675" observedRunningTime="2025-02-13 20:13:25.043679869 +0000 UTC m=+72.187007276" watchObservedRunningTime="2025-02-13 20:13:25.106866128 +0000 UTC m=+72.250193533" Feb 13 20:13:25.203834 containerd[2000]: time="2025-02-13T20:13:25.203651745Z" level=info msg="StartContainer for \"9035c00284c374e90e5c3ea2a7347195c7aac3170474ecfe3c8e74bbdf2ca0b2\" returns successfully" Feb 13 20:13:25.339405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9035c00284c374e90e5c3ea2a7347195c7aac3170474ecfe3c8e74bbdf2ca0b2-rootfs.mount: Deactivated successfully. Feb 13 20:13:25.467115 containerd[2000]: time="2025-02-13T20:13:25.466540348Z" level=info msg="shim disconnected" id=9035c00284c374e90e5c3ea2a7347195c7aac3170474ecfe3c8e74bbdf2ca0b2 namespace=k8s.io Feb 13 20:13:25.467115 containerd[2000]: time="2025-02-13T20:13:25.466616668Z" level=warning msg="cleaning up after shim disconnected" id=9035c00284c374e90e5c3ea2a7347195c7aac3170474ecfe3c8e74bbdf2ca0b2 namespace=k8s.io Feb 13 20:13:25.467115 containerd[2000]: time="2025-02-13T20:13:25.466639008Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:13:25.467912 kubelet[2500]: I0213 20:13:25.467441 2500 setters.go:580] "Node became not ready" node="172.31.17.230" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T20:13:25Z","lastTransitionTime":"2025-02-13T20:13:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 20:13:25.487227 kubelet[2500]: E0213 20:13:25.487186 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:26.029686 containerd[2000]: time="2025-02-13T20:13:26.029639350Z" level=info msg="CreateContainer within sandbox \"196315eaa5b07040223788f5c494c3b84bfda46fb0284a48f2f32b663da9d727\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 20:13:26.051482 containerd[2000]: time="2025-02-13T20:13:26.051439581Z" level=info msg="CreateContainer within sandbox \"196315eaa5b07040223788f5c494c3b84bfda46fb0284a48f2f32b663da9d727\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"282ad259f15bacbf5b25da63535f896637247aa3c5fa2e380a71e1ce23ceee00\"" Feb 13 20:13:26.052731 containerd[2000]: time="2025-02-13T20:13:26.052689642Z" level=info msg="StartContainer for 
\"282ad259f15bacbf5b25da63535f896637247aa3c5fa2e380a71e1ce23ceee00\"" Feb 13 20:13:26.147606 containerd[2000]: time="2025-02-13T20:13:26.143954917Z" level=info msg="StartContainer for \"282ad259f15bacbf5b25da63535f896637247aa3c5fa2e380a71e1ce23ceee00\" returns successfully" Feb 13 20:13:26.177371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-282ad259f15bacbf5b25da63535f896637247aa3c5fa2e380a71e1ce23ceee00-rootfs.mount: Deactivated successfully. Feb 13 20:13:26.181552 containerd[2000]: time="2025-02-13T20:13:26.181399104Z" level=info msg="shim disconnected" id=282ad259f15bacbf5b25da63535f896637247aa3c5fa2e380a71e1ce23ceee00 namespace=k8s.io Feb 13 20:13:26.181767 containerd[2000]: time="2025-02-13T20:13:26.181553888Z" level=warning msg="cleaning up after shim disconnected" id=282ad259f15bacbf5b25da63535f896637247aa3c5fa2e380a71e1ce23ceee00 namespace=k8s.io Feb 13 20:13:26.181767 containerd[2000]: time="2025-02-13T20:13:26.181567100Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:13:26.488063 kubelet[2500]: E0213 20:13:26.487995 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:27.036691 containerd[2000]: time="2025-02-13T20:13:27.036510214Z" level=info msg="CreateContainer within sandbox \"196315eaa5b07040223788f5c494c3b84bfda46fb0284a48f2f32b663da9d727\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 20:13:27.070614 containerd[2000]: time="2025-02-13T20:13:27.070565375Z" level=info msg="CreateContainer within sandbox \"196315eaa5b07040223788f5c494c3b84bfda46fb0284a48f2f32b663da9d727\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cf51ffb857f9b8425470d17f9a5fd2c3eca1972f8daaa695500a4da924264d41\"" Feb 13 20:13:27.071416 containerd[2000]: time="2025-02-13T20:13:27.071384202Z" level=info msg="StartContainer for \"cf51ffb857f9b8425470d17f9a5fd2c3eca1972f8daaa695500a4da924264d41\"" Feb 13 20:13:27.151523 containerd[2000]: time="2025-02-13T20:13:27.151371355Z" level=info msg="StartContainer for \"cf51ffb857f9b8425470d17f9a5fd2c3eca1972f8daaa695500a4da924264d41\" returns successfully" Feb 13 20:13:27.190871 systemd[1]: run-containerd-runc-k8s.io-cf51ffb857f9b8425470d17f9a5fd2c3eca1972f8daaa695500a4da924264d41-runc.IasvsR.mount: Deactivated successfully. 
Feb 13 20:13:27.489172 kubelet[2500]: E0213 20:13:27.489127 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:28.493499 kubelet[2500]: E0213 20:13:28.493443 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:29.494477 kubelet[2500]: E0213 20:13:29.494430 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:29.827215 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 20:13:30.496302 kubelet[2500]: E0213 20:13:30.496220 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:31.497366 kubelet[2500]: E0213 20:13:31.497307 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:32.497720 kubelet[2500]: E0213 20:13:32.497616 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:33.417949 kubelet[2500]: E0213 20:13:33.417912 2500 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:33.498187 kubelet[2500]: E0213 20:13:33.498148 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:34.305234 systemd-networkd[1566]: lxc_health: Link UP Feb 13 20:13:34.312402 systemd-networkd[1566]: lxc_health: Gained carrier Feb 13 20:13:34.324463 (udev-worker)[5309]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 20:13:34.498404 kubelet[2500]: E0213 20:13:34.498360 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:35.499537 kubelet[2500]: E0213 20:13:35.499488 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:35.550700 systemd-networkd[1566]: lxc_health: Gained IPv6LL Feb 13 20:13:36.208349 kubelet[2500]: I0213 20:13:36.204417 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-plpph" podStartSLOduration=16.20439601 podStartE2EDuration="16.20439601s" podCreationTimestamp="2025-02-13 20:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:13:28.094340713 +0000 UTC m=+75.237668118" watchObservedRunningTime="2025-02-13 20:13:36.20439601 +0000 UTC m=+83.347723415" Feb 13 20:13:36.500344 kubelet[2500]: E0213 20:13:36.500213 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:37.502106 kubelet[2500]: E0213 20:13:37.502015 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:38.158628 ntpd[1960]: Listen normally on 14 lxc_health [fe80::6033:96ff:fe16:37f0%15]:123 Feb 13 20:13:38.159340 ntpd[1960]: 13 Feb 20:13:38 ntpd[1960]: Listen normally on 14 lxc_health [fe80::6033:96ff:fe16:37f0%15]:123 Feb 13 20:13:38.503304 kubelet[2500]: E0213 20:13:38.503144 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:39.503558 kubelet[2500]: E0213 20:13:39.503497 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:40.504975 kubelet[2500]: E0213 20:13:40.504912 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:41.505748 kubelet[2500]: E0213 20:13:41.505699 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:42.506207 kubelet[2500]: E0213 20:13:42.506148 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:43.507030 kubelet[2500]: E0213 20:13:43.506975 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:44.507853 kubelet[2500]: E0213 20:13:44.507815 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:45.508964 kubelet[2500]: E0213 20:13:45.508907 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:46.509971 kubelet[2500]: E0213 20:13:46.509915 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:47.510450 kubelet[2500]: E0213 20:13:47.510392 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:48.511339 kubelet[2500]: E0213 20:13:48.511249 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 20:13:49.512097 kubelet[2500]: E0213 20:13:49.512033 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:50.512826 kubelet[2500]: E0213 20:13:50.512768 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:51.513646 kubelet[2500]: E0213 20:13:51.513592 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:52.513999 kubelet[2500]: E0213 20:13:52.513944 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:53.417393 kubelet[2500]: E0213 20:13:53.417337 2500 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:53.514184 kubelet[2500]: E0213 20:13:53.514115 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:54.514666 kubelet[2500]: E0213 20:13:54.514580 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:54.905896 kubelet[2500]: E0213 20:13:54.905784 2500 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.230?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 20:13:55.285341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4566e63f03c36cb299dc4fe00caa3414d600ea2a58500cf9030c8bcb0f3830b0-rootfs.mount: Deactivated successfully. 
Feb 13 20:13:55.291951 containerd[2000]: time="2025-02-13T20:13:55.291879176Z" level=info msg="shim disconnected" id=4566e63f03c36cb299dc4fe00caa3414d600ea2a58500cf9030c8bcb0f3830b0 namespace=k8s.io Feb 13 20:13:55.291951 containerd[2000]: time="2025-02-13T20:13:55.291949039Z" level=warning msg="cleaning up after shim disconnected" id=4566e63f03c36cb299dc4fe00caa3414d600ea2a58500cf9030c8bcb0f3830b0 namespace=k8s.io Feb 13 20:13:55.292849 containerd[2000]: time="2025-02-13T20:13:55.291961084Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:13:55.514821 kubelet[2500]: E0213 20:13:55.514776 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:56.034129 kubelet[2500]: E0213 20:13:56.034067 2500 kubelet_node_status.go:544] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T20:13:46Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T20:13:46Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T20:13:46Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T20:13:46Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":73054371},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\\\",\\\"registry.k8s.io/kube-proxy:v1.30.10\\\"],\\\"sizeBytes\\\":29056877},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286}]}}\" for node \"172.31.17.230\": Patch \"https://172.31.24.207:6443/api/v1/nodes/172.31.17.230/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 20:13:56.137012 kubelet[2500]: I0213 20:13:56.136924 2500 scope.go:117] "RemoveContainer" containerID="4566e63f03c36cb299dc4fe00caa3414d600ea2a58500cf9030c8bcb0f3830b0" Feb 13 20:13:56.144240 containerd[2000]: time="2025-02-13T20:13:56.144189097Z" level=info msg="CreateContainer within sandbox \"bb98582e16cc15b0f9242805bf6a17d884092dda6d53b0f89aca60a41c80dde1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Feb 13 20:13:56.179353 containerd[2000]: time="2025-02-13T20:13:56.179289760Z" level=info msg="CreateContainer within sandbox \"bb98582e16cc15b0f9242805bf6a17d884092dda6d53b0f89aca60a41c80dde1\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id 
\"d0467169db581baf3b9fc06e6901a9d52824fa981f2fa5c3819193d8477b223a\"" Feb 13 20:13:56.180429 containerd[2000]: time="2025-02-13T20:13:56.180385607Z" level=info msg="StartContainer for \"d0467169db581baf3b9fc06e6901a9d52824fa981f2fa5c3819193d8477b223a\"" Feb 13 20:13:56.257839 containerd[2000]: time="2025-02-13T20:13:56.257797050Z" level=info msg="StartContainer for \"d0467169db581baf3b9fc06e6901a9d52824fa981f2fa5c3819193d8477b223a\" returns successfully" Feb 13 20:13:56.515862 kubelet[2500]: E0213 20:13:56.515806 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:57.516202 kubelet[2500]: E0213 20:13:57.516145 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:58.516963 kubelet[2500]: E0213 20:13:58.516907 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:13:59.517578 kubelet[2500]: E0213 20:13:59.517519 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:00.517968 kubelet[2500]: E0213 20:14:00.517916 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:01.518737 kubelet[2500]: E0213 20:14:01.518686 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:02.520161 kubelet[2500]: E0213 20:14:02.519952 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:03.520891 kubelet[2500]: E0213 20:14:03.520834 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:04.521078 kubelet[2500]: E0213 20:14:04.521021 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:04.906790 kubelet[2500]: E0213 20:14:04.906740 2500 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.17.230)" Feb 13 20:14:05.522225 kubelet[2500]: E0213 20:14:05.522167 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:06.034912 kubelet[2500]: E0213 20:14:06.034776 2500 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.17.230\": Get \"https://172.31.24.207:6443/api/v1/nodes/172.31.17.230?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 20:14:06.523187 kubelet[2500]: E0213 20:14:06.523131 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:07.523630 kubelet[2500]: E0213 20:14:07.523563 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:08.524469 kubelet[2500]: E0213 20:14:08.524425 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:09.525021 kubelet[2500]: E0213 20:14:09.524977 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:10.525639 kubelet[2500]: E0213 20:14:10.525598 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:11.526270 kubelet[2500]: E0213 20:14:11.526219 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:12.527046 kubelet[2500]: E0213 20:14:12.526989 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:13.417266 kubelet[2500]: E0213 20:14:13.417221 2500 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:13.470882 containerd[2000]: time="2025-02-13T20:14:13.470841179Z" level=info msg="StopPodSandbox for \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\"" Feb 13 20:14:13.471420 containerd[2000]: time="2025-02-13T20:14:13.470947762Z" level=info msg="TearDown network for sandbox \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\" successfully" Feb 13 20:14:13.471420 containerd[2000]: time="2025-02-13T20:14:13.470964011Z" level=info msg="StopPodSandbox for \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\" returns successfully" Feb 13 20:14:13.471526 containerd[2000]: time="2025-02-13T20:14:13.471429635Z" level=info msg="RemovePodSandbox for \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\"" Feb 13 20:14:13.471526 containerd[2000]: time="2025-02-13T20:14:13.471458437Z" level=info msg="Forcibly stopping sandbox \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\"" Feb 13 20:14:13.471604 containerd[2000]: time="2025-02-13T20:14:13.471525197Z" level=info msg="TearDown network for sandbox \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\" successfully" Feb 13 20:14:13.476765 containerd[2000]: time="2025-02-13T20:14:13.476707454Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:14:13.476971 containerd[2000]: time="2025-02-13T20:14:13.476782347Z" level=info msg="RemovePodSandbox \"292e9d75cca52811b8290104445103db01a0a055923bc48725a37ce4e912f64b\" returns successfully" Feb 13 20:14:13.527296 kubelet[2500]: E0213 20:14:13.527235 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:14.528075 kubelet[2500]: E0213 20:14:14.528005 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:14.907089 kubelet[2500]: E0213 20:14:14.906980 2500 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.230?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 20:14:15.529148 kubelet[2500]: E0213 20:14:15.529092 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:16.036013 kubelet[2500]: E0213 20:14:16.035879 2500 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.17.230\": Get \"https://172.31.24.207:6443/api/v1/nodes/172.31.17.230?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 20:14:16.529943 kubelet[2500]: E0213 20:14:16.529899 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:17.530375 kubelet[2500]: E0213 20:14:17.530314 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:18.531020 kubelet[2500]: E0213 20:14:18.530966 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:19.532075 kubelet[2500]: E0213 20:14:19.532017 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:20.532303 kubelet[2500]: E0213 20:14:20.532254 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:21.533466 kubelet[2500]: E0213 20:14:21.533423 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:22.533860 kubelet[2500]: E0213 20:14:22.533816 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:23.534042 kubelet[2500]: E0213 20:14:23.533975 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:24.534255 kubelet[2500]: E0213 20:14:24.534199 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:24.907591 kubelet[2500]: E0213 20:14:24.907525 2500 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.230?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 20:14:25.535184 kubelet[2500]: E0213 20:14:25.535139 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:26.036421 kubelet[2500]: 
E0213 20:14:26.036324 2500 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.17.230\": Get \"https://172.31.24.207:6443/api/v1/nodes/172.31.17.230?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 20:14:26.535609 kubelet[2500]: E0213 20:14:26.535550 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:27.536215 kubelet[2500]: E0213 20:14:27.536150 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:28.246258 kubelet[2500]: E0213 20:14:28.246074 2500 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.207:6443/api/v1/namespaces/kube-system/events\": unexpected EOF" event="&Event{ObjectMeta:{cilium-operator-599987898-hz8kb.1823ddb1a4c3034e kube-system 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:cilium-operator-599987898-hz8kb,UID:8165f75b-7e15-476d-a584-6d05a63a52a4,APIVersion:v1,ResourceVersion:895,FieldPath:spec.containers{cilium-operator},},Reason:Pulled,Message:Container image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" already present on machine,Source:EventSource{Component:kubelet,Host:172.31.17.230,},FirstTimestamp:2025-02-13 20:13:56.138337102 +0000 UTC m=+103.281664511,LastTimestamp:2025-02-13 20:13:56.138337102 +0000 UTC m=+103.281664511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.17.230,}" Feb 13 20:14:28.248842 kubelet[2500]: E0213 20:14:28.246449 2500 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.230?timeout=10s\": unexpected EOF" Feb 13 20:14:28.248842 kubelet[2500]: I0213 20:14:28.246478 2500 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 13 20:14:28.537277 kubelet[2500]: E0213 20:14:28.537140 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:29.247067 kubelet[2500]: E0213 20:14:29.247011 2500 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.17.230\": Get \"https://172.31.24.207:6443/api/v1/nodes/172.31.17.230?timeout=10s\": dial tcp 172.31.24.207:6443: connect: connection refused - error from a previous attempt: unexpected EOF" Feb 13 20:14:29.247067 kubelet[2500]: E0213 20:14:29.247039 2500 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count" Feb 13 20:14:29.251771 kubelet[2500]: I0213 20:14:29.251677 2500 status_manager.go:853] "Failed to get status for pod" podUID="8165f75b-7e15-476d-a584-6d05a63a52a4" pod="kube-system/cilium-operator-599987898-hz8kb" err="Get \"https://172.31.24.207:6443/api/v1/namespaces/kube-system/pods/cilium-operator-599987898-hz8kb\": dial tcp 172.31.24.207:6443: connect: connection refused - error from a previous attempt: unexpected EOF" Feb 13 20:14:29.252295 kubelet[2500]: I0213 20:14:29.252255 2500 status_manager.go:853] "Failed to get status for pod" podUID="8165f75b-7e15-476d-a584-6d05a63a52a4" 
pod="kube-system/cilium-operator-599987898-hz8kb" err="Get \"https://172.31.24.207:6443/api/v1/namespaces/kube-system/pods/cilium-operator-599987898-hz8kb\": dial tcp 172.31.24.207:6443: connect: connection refused" Feb 13 20:14:29.253232 kubelet[2500]: I0213 20:14:29.253194 2500 status_manager.go:853] "Failed to get status for pod" podUID="8165f75b-7e15-476d-a584-6d05a63a52a4" pod="kube-system/cilium-operator-599987898-hz8kb" err="Get \"https://172.31.24.207:6443/api/v1/namespaces/kube-system/pods/cilium-operator-599987898-hz8kb\": dial tcp 172.31.24.207:6443: connect: connection refused" Feb 13 20:14:29.262420 kubelet[2500]: E0213 20:14:29.262357 2500 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.230?timeout=10s\": dial tcp 172.31.24.207:6443: connect: connection refused - error from a previous attempt: read tcp 172.31.17.230:40588->172.31.24.207:6443: read: connection reset by peer" interval="200ms" Feb 13 20:14:29.463931 kubelet[2500]: E0213 20:14:29.463872 2500 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.230?timeout=10s\": dial tcp 172.31.24.207:6443: connect: connection refused" interval="400ms" Feb 13 20:14:29.537652 kubelet[2500]: E0213 20:14:29.537520 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:29.865316 kubelet[2500]: E0213 20:14:29.865266 2500 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.230?timeout=10s\": dial tcp 172.31.24.207:6443: connect: connection refused" interval="800ms" Feb 13 20:14:30.538156 kubelet[2500]: E0213 20:14:30.538099 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:30.645410 kubelet[2500]: E0213 20:14:30.645025 2500 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.207:6443/api/v1/namespaces/kube-system/events\": dial tcp 172.31.24.207:6443: connect: connection refused" event="&Event{ObjectMeta:{cilium-operator-599987898-hz8kb.1823ddb1a4c3034e kube-system 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:cilium-operator-599987898-hz8kb,UID:8165f75b-7e15-476d-a584-6d05a63a52a4,APIVersion:v1,ResourceVersion:895,FieldPath:spec.containers{cilium-operator},},Reason:Pulled,Message:Container image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" already present on machine,Source:EventSource{Component:kubelet,Host:172.31.17.230,},FirstTimestamp:2025-02-13 20:13:56.138337102 +0000 UTC m=+103.281664511,LastTimestamp:2025-02-13 20:13:56.138337102 +0000 UTC m=+103.281664511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.17.230,}" Feb 13 20:14:30.666816 kubelet[2500]: E0213 20:14:30.666758 2500 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.230?timeout=10s\": dial tcp 172.31.24.207:6443: connect: connection 
refused" interval="1.6s" Feb 13 20:14:31.539092 kubelet[2500]: E0213 20:14:31.539033 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:31.668682 kubelet[2500]: E0213 20:14:31.668611 2500 desired_state_of_world_populator.go:318] "Error processing volume" err="error processing PVC default/test-dynamic-volume-claim: failed to fetch PVC from API server: Get \"https://172.31.24.207:6443/api/v1/namespaces/default/persistentvolumeclaims/test-dynamic-volume-claim\": dial tcp 172.31.24.207:6443: connect: connection refused" pod="default/test-pod-1" volumeName="config" Feb 13 20:14:32.539731 kubelet[2500]: E0213 20:14:32.539679 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:33.417193 kubelet[2500]: E0213 20:14:33.417143 2500 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:33.540197 kubelet[2500]: E0213 20:14:33.540146 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:34.540611 kubelet[2500]: E0213 20:14:34.540559 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:35.541699 kubelet[2500]: E0213 20:14:35.541638 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:36.542705 kubelet[2500]: E0213 20:14:36.542651 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:37.543575 kubelet[2500]: E0213 20:14:37.543532 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:38.544553 kubelet[2500]: E0213 20:14:38.544499 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:39.545514 kubelet[2500]: E0213 20:14:39.545473 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:40.546816 kubelet[2500]: E0213 20:14:40.546262 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:41.547800 kubelet[2500]: E0213 20:14:41.547749 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:42.268188 kubelet[2500]: E0213 20:14:42.268133 2500 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.230?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Feb 13 20:14:42.549011 kubelet[2500]: E0213 20:14:42.548774 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:43.549071 kubelet[2500]: E0213 20:14:43.549022 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:44.549215 kubelet[2500]: E0213 20:14:44.549160 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 20:14:45.550008 kubelet[2500]: E0213 20:14:45.549972 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:14:46.550313 kubelet[2500]: E0213 20:14:46.550267 2500 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"