Jan 29 11:34:50.166545 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025 Jan 29 11:34:50.166583 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:34:50.168023 kernel: BIOS-provided physical RAM map: Jan 29 11:34:50.168037 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 11:34:50.168049 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 11:34:50.168062 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 11:34:50.168083 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Jan 29 11:34:50.168096 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Jan 29 11:34:50.168109 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Jan 29 11:34:50.168121 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 11:34:50.168134 kernel: NX (Execute Disable) protection: active Jan 29 11:34:50.168146 kernel: APIC: Static calls initialized Jan 29 11:34:50.168159 kernel: SMBIOS 2.7 present. Jan 29 11:34:50.168173 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 29 11:34:50.168192 kernel: Hypervisor detected: KVM Jan 29 11:34:50.168207 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 11:34:50.168222 kernel: kvm-clock: using sched offset of 8723235860 cycles Jan 29 11:34:50.168237 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 11:34:50.168346 kernel: tsc: Detected 2499.994 MHz processor Jan 29 11:34:50.168364 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 11:34:50.168380 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 11:34:50.168398 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Jan 29 11:34:50.168414 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 11:34:50.168428 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 11:34:50.168459 kernel: Using GB pages for direct mapping Jan 29 11:34:50.168470 kernel: ACPI: Early table checksum verification disabled Jan 29 11:34:50.168481 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Jan 29 11:34:50.168493 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Jan 29 11:34:50.168504 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 29 11:34:50.168514 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 29 11:34:50.168529 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Jan 29 11:34:50.168541 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 29 11:34:50.168552 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 29 11:34:50.168563 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 29 11:34:50.168575 
kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 29 11:34:50.168586 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 29 11:34:50.168597 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 29 11:34:50.168608 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 29 11:34:50.168620 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Jan 29 11:34:50.168635 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Jan 29 11:34:50.168651 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Jan 29 11:34:50.168665 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Jan 29 11:34:50.168677 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Jan 29 11:34:50.168690 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Jan 29 11:34:50.168708 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Jan 29 11:34:50.168721 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Jan 29 11:34:50.168734 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Jan 29 11:34:50.168747 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Jan 29 11:34:50.168759 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 11:34:50.168773 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 29 11:34:50.168787 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 29 11:34:50.168801 kernel: NUMA: Initialized distance table, cnt=1 Jan 29 11:34:50.168889 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Jan 29 11:34:50.168931 kernel: Zone ranges: Jan 29 11:34:50.168948 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 11:34:50.168960 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Jan 29 11:34:50.168974 kernel: Normal empty Jan 29 11:34:50.168988 kernel: Movable zone start for each node Jan 29 11:34:50.169001 kernel: Early memory node ranges Jan 29 11:34:50.169015 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 11:34:50.169029 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Jan 29 11:34:50.169042 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Jan 29 11:34:50.169056 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:34:50.169074 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 11:34:50.169087 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Jan 29 11:34:50.169101 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 29 11:34:50.169115 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 11:34:50.169128 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 29 11:34:50.169142 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 11:34:50.169156 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 11:34:50.169170 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 11:34:50.169184 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 11:34:50.169202 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 11:34:50.169216 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 11:34:50.169230 kernel: TSC deadline timer available Jan 29 11:34:50.169245 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 11:34:50.169260 
kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 11:34:50.169275 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jan 29 11:34:50.169290 kernel: Booting paravirtualized kernel on KVM Jan 29 11:34:50.169305 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 11:34:50.169321 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 11:34:50.169339 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 29 11:34:50.169354 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 11:34:50.169369 kernel: pcpu-alloc: [0] 0 1 Jan 29 11:34:50.169383 kernel: kvm-guest: PV spinlocks enabled Jan 29 11:34:50.169398 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 11:34:50.169415 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:34:50.169431 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:34:50.169476 kernel: random: crng init done Jan 29 11:34:50.169496 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:34:50.169511 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 11:34:50.169526 kernel: Fallback order for Node 0: 0 Jan 29 11:34:50.169541 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Jan 29 11:34:50.169565 kernel: Policy zone: DMA32 Jan 29 11:34:50.169605 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:34:50.169620 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 125152K reserved, 0K cma-reserved) Jan 29 11:34:50.169635 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 11:34:50.169650 kernel: Kernel/User page tables isolation: enabled Jan 29 11:34:50.169668 kernel: ftrace: allocating 37923 entries in 149 pages Jan 29 11:34:50.169684 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 11:34:50.169700 kernel: Dynamic Preempt: voluntary Jan 29 11:34:50.169714 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:34:50.169731 kernel: rcu: RCU event tracing is enabled. Jan 29 11:34:50.169747 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 11:34:50.169761 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:34:50.169776 kernel: Rude variant of Tasks RCU enabled. Jan 29 11:34:50.169792 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:34:50.169810 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 11:34:50.169825 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 11:34:50.169841 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 11:34:50.169857 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 29 11:34:50.169872 kernel: Console: colour VGA+ 80x25 Jan 29 11:34:50.169887 kernel: printk: console [ttyS0] enabled Jan 29 11:34:50.169902 kernel: ACPI: Core revision 20230628 Jan 29 11:34:50.169917 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 29 11:34:50.169931 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 11:34:50.169949 kernel: x2apic enabled Jan 29 11:34:50.169966 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 11:34:50.169993 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns Jan 29 11:34:50.170012 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994) Jan 29 11:34:50.170028 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 29 11:34:50.170045 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 29 11:34:50.170060 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 11:34:50.170076 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 11:34:50.170090 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 11:34:50.170105 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 11:34:50.170121 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jan 29 11:34:50.170137 kernel: RETBleed: Vulnerable Jan 29 11:34:50.170153 kernel: Speculative Store Bypass: Vulnerable Jan 29 11:34:50.170172 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 11:34:50.170188 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 11:34:50.170203 kernel: GDS: Unknown: Dependent on hypervisor status Jan 29 11:34:50.170219 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 11:34:50.170235 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 11:34:50.170248 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 11:34:50.170268 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 29 11:34:50.170283 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 29 11:34:50.170297 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 29 11:34:50.170311 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 29 11:34:50.170325 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 29 11:34:50.170338 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 29 11:34:50.170354 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 11:34:50.170372 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 29 11:34:50.170388 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 29 11:34:50.170406 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 29 11:34:50.170422 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 29 11:34:50.170462 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 29 11:34:50.170477 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 29 11:34:50.170494 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Jan 29 11:34:50.170511 kernel: Freeing SMP alternatives memory: 32K Jan 29 11:34:50.170528 kernel: pid_max: default: 32768 minimum: 301 Jan 29 11:34:50.170544 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:34:50.170561 kernel: landlock: Up and running. Jan 29 11:34:50.170578 kernel: SELinux: Initializing. Jan 29 11:34:50.170594 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 11:34:50.170611 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 11:34:50.170628 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 29 11:34:50.170651 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:34:50.170669 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:34:50.170686 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:34:50.170711 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 29 11:34:50.170728 kernel: signal: max sigframe size: 3632 Jan 29 11:34:50.170745 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:34:50.170762 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:34:50.170779 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 11:34:50.170797 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:34:50.170819 kernel: smpboot: x86: Booting SMP configuration: Jan 29 11:34:50.170836 kernel: .... node #0, CPUs: #1 Jan 29 11:34:50.170854 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 29 11:34:50.170872 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 29 11:34:50.170888 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 11:34:50.170905 kernel: smpboot: Max logical packages: 1 Jan 29 11:34:50.170923 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS) Jan 29 11:34:50.170940 kernel: devtmpfs: initialized Jan 29 11:34:50.170957 kernel: x86/mm: Memory block size: 128MB Jan 29 11:34:50.170977 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:34:50.171268 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 11:34:50.171288 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:34:50.171306 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:34:50.171323 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:34:50.171340 kernel: audit: type=2000 audit(1738150489.403:1): state=initialized audit_enabled=0 res=1 Jan 29 11:34:50.171356 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:34:50.171373 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 11:34:50.171395 kernel: cpuidle: using governor menu Jan 29 11:34:50.171413 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:34:50.171430 kernel: dca service started, version 1.12.1 Jan 29 11:34:50.171460 kernel: PCI: Using configuration type 1 for base access Jan 29 11:34:50.171475 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 11:34:50.171491 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 11:34:50.171593 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 11:34:50.171612 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:34:50.171630 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:34:50.171652 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:34:50.171669 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:34:50.171728 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:34:50.171745 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:34:50.171762 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 29 11:34:50.171780 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 11:34:50.171797 kernel: ACPI: Interpreter enabled Jan 29 11:34:50.171814 kernel: ACPI: PM: (supports S0 S5) Jan 29 11:34:50.171831 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 11:34:50.171849 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 11:34:50.171871 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 11:34:50.171888 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 29 11:34:50.171905 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 11:34:50.172234 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:34:50.172495 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 11:34:50.172634 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 11:34:50.172651 kernel: acpiphp: Slot [3] registered Jan 29 11:34:50.172670 kernel: acpiphp: Slot [4] registered Jan 29 11:34:50.172684 kernel: acpiphp: Slot [5] registered Jan 29 11:34:50.172697 kernel: acpiphp: Slot [6] registered Jan 29 11:34:50.172711 kernel: acpiphp: Slot [7] registered Jan 29 11:34:50.172725 kernel: acpiphp: Slot [8] registered Jan 29 11:34:50.172739 kernel: acpiphp: Slot [9] registered Jan 29 11:34:50.172752 kernel: acpiphp: Slot [10] registered Jan 29 11:34:50.172766 kernel: acpiphp: Slot [11] registered Jan 29 11:34:50.172780 kernel: acpiphp: Slot [12] registered Jan 29 11:34:50.172797 kernel: acpiphp: Slot [13] registered Jan 29 11:34:50.172854 kernel: acpiphp: Slot [14] registered Jan 29 11:34:50.172868 kernel: acpiphp: Slot [15] registered Jan 29 11:34:50.172883 kernel: acpiphp: Slot [16] registered Jan 29 11:34:50.172896 kernel: acpiphp: Slot [17] registered Jan 29 11:34:50.172910 kernel: acpiphp: Slot [18] registered Jan 29 11:34:50.172924 kernel: acpiphp: Slot [19] registered Jan 29 11:34:50.172938 kernel: acpiphp: Slot [20] registered Jan 29 11:34:50.172952 kernel: acpiphp: Slot [21] registered Jan 29 11:34:50.173051 kernel: acpiphp: Slot [22] registered Jan 29 11:34:50.173070 kernel: acpiphp: Slot [23] registered Jan 29 11:34:50.173086 kernel: acpiphp: Slot [24] registered Jan 29 11:34:50.173100 kernel: acpiphp: Slot [25] registered Jan 29 11:34:50.173114 kernel: acpiphp: Slot [26] registered Jan 29 11:34:50.173128 kernel: acpiphp: Slot [27] registered Jan 29 11:34:50.173142 kernel: acpiphp: Slot [28] registered Jan 29 11:34:50.173155 kernel: acpiphp: Slot [29] registered Jan 29 11:34:50.173170 kernel: acpiphp: Slot [30] registered Jan 29 11:34:50.173184 kernel: acpiphp: Slot [31] registered Jan 29 11:34:50.173201 kernel: PCI host bridge to bus 0000:00 
Jan 29 11:34:50.173348 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 11:34:50.173483 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 11:34:50.173613 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 11:34:50.173734 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 29 11:34:50.173857 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 11:34:50.174011 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 11:34:50.174196 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 29 11:34:50.174349 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jan 29 11:34:50.174509 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 29 11:34:50.174645 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jan 29 11:34:50.174782 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 29 11:34:50.174917 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 29 11:34:50.175052 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 29 11:34:50.175194 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 29 11:34:50.175328 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 29 11:34:50.175474 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 29 11:34:50.175609 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 12695 usecs Jan 29 11:34:50.175762 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jan 29 11:34:50.175897 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Jan 29 11:34:50.176036 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 29 11:34:50.176476 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 11:34:50.176632 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 29 11:34:50.176844 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Jan 29 11:34:50.176989 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 29 11:34:50.177124 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Jan 29 11:34:50.177144 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 11:34:50.177167 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 11:34:50.177183 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 11:34:50.177199 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 11:34:50.177215 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 11:34:50.177231 kernel: iommu: Default domain type: Translated Jan 29 11:34:50.177246 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 11:34:50.177262 kernel: PCI: Using ACPI for IRQ routing Jan 29 11:34:50.177278 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 11:34:50.177294 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 11:34:50.177313 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Jan 29 11:34:50.177459 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 29 11:34:50.177604 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 29 11:34:50.177740 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 11:34:50.177759 kernel: vgaarb: loaded Jan 29 11:34:50.177776 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 29 11:34:50.177792 kernel: hpet0: 8 
comparators, 32-bit 62.500000 MHz counter Jan 29 11:34:50.177807 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 11:34:50.177823 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 11:34:50.177844 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:34:50.177859 kernel: pnp: PnP ACPI init Jan 29 11:34:50.177875 kernel: pnp: PnP ACPI: found 5 devices Jan 29 11:34:50.177891 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 11:34:50.177907 kernel: NET: Registered PF_INET protocol family Jan 29 11:34:50.177923 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 11:34:50.177939 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 29 11:34:50.177955 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:34:50.177971 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 11:34:50.177990 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 11:34:50.178005 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 29 11:34:50.178021 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 11:34:50.178037 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 11:34:50.178052 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:34:50.178068 kernel: NET: Registered PF_XDP protocol family Jan 29 11:34:50.178195 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 11:34:50.178318 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 11:34:50.178467 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 11:34:50.178591 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 29 11:34:50.178718 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 11:34:50.178737 kernel: PCI: CLS 0 bytes, default 64 Jan 29 11:34:50.181039 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 11:34:50.181063 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns Jan 29 11:34:50.181077 kernel: clocksource: Switched to clocksource tsc Jan 29 11:34:50.181092 kernel: Initialise system trusted keyrings Jan 29 11:34:50.181112 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 11:34:50.181125 kernel: Key type asymmetric registered Jan 29 11:34:50.181138 kernel: Asymmetric key parser 'x509' registered Jan 29 11:34:50.181151 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:34:50.181164 kernel: io scheduler mq-deadline registered Jan 29 11:34:50.181178 kernel: io scheduler kyber registered Jan 29 11:34:50.181193 kernel: io scheduler bfq registered Jan 29 11:34:50.181206 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:34:50.181221 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:34:50.181240 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:34:50.181257 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 11:34:50.181273 kernel: i8042: Warning: Keylock active Jan 29 11:34:50.181288 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 11:34:50.181304 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 11:34:50.181496 kernel: rtc_cmos 00:00: RTC can 
wake from S4 Jan 29 11:34:50.181743 kernel: rtc_cmos 00:00: registered as rtc0 Jan 29 11:34:50.181882 kernel: rtc_cmos 00:00: setting system clock to 2025-01-29T11:34:49 UTC (1738150489) Jan 29 11:34:50.182017 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 29 11:34:50.182036 kernel: intel_pstate: CPU model not supported Jan 29 11:34:50.182053 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:34:50.182068 kernel: Segment Routing with IPv6 Jan 29 11:34:50.182083 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:34:50.182099 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:34:50.182114 kernel: Key type dns_resolver registered Jan 29 11:34:50.182130 kernel: IPI shorthand broadcast: enabled Jan 29 11:34:50.182146 kernel: sched_clock: Marking stable (736003379, 225811063)->(1077064462, -115250020) Jan 29 11:34:50.182166 kernel: registered taskstats version 1 Jan 29 11:34:50.182181 kernel: Loading compiled-in X.509 certificates Jan 29 11:34:50.182197 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55' Jan 29 11:34:50.182212 kernel: Key type .fscrypt registered Jan 29 11:34:50.182227 kernel: Key type fscrypt-provisioning registered Jan 29 11:34:50.182243 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:34:50.182258 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:34:50.182274 kernel: ima: No architecture policies found Jan 29 11:34:50.182289 kernel: clk: Disabling unused clocks Jan 29 11:34:50.182308 kernel: Freeing unused kernel image (initmem) memory: 42972K Jan 29 11:34:50.182323 kernel: Write protecting the kernel read-only data: 36864k Jan 29 11:34:50.182339 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 29 11:34:50.182354 kernel: Run /init as init process Jan 29 11:34:50.182370 kernel: with arguments: Jan 29 11:34:50.182387 kernel: /init Jan 29 11:34:50.182402 kernel: with environment: Jan 29 11:34:50.182418 kernel: HOME=/ Jan 29 11:34:50.182433 kernel: TERM=linux Jan 29 11:34:50.182618 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:34:50.182665 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:34:50.182685 systemd[1]: Detected virtualization amazon. Jan 29 11:34:50.182701 systemd[1]: Detected architecture x86-64. Jan 29 11:34:50.182717 systemd[1]: Running in initrd. Jan 29 11:34:50.182732 systemd[1]: No hostname configured, using default hostname. Jan 29 11:34:50.182748 systemd[1]: Hostname set to . Jan 29 11:34:50.182767 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:34:50.182783 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:34:50.182837 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:34:50.182854 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:34:50.182871 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:34:50.182888 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 29 11:34:50.182904 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:34:50.182920 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:34:50.182942 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:34:50.182958 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:34:50.182975 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:34:50.182990 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 11:34:50.183007 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:34:50.183023 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:34:50.183039 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:34:50.183058 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:34:50.183142 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:34:50.183161 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:34:50.183178 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:34:50.183195 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:34:50.183211 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:34:50.183227 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:34:50.183243 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:34:50.183263 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:34:50.183280 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:34:50.183296 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:34:50.183313 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:34:50.183329 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:34:50.183349 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:34:50.183368 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:34:50.183385 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:34:50.183401 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:34:50.183418 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:34:50.183501 systemd-journald[179]: Collecting audit messages is disabled. Jan 29 11:34:50.183543 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:34:50.183560 systemd-journald[179]: Journal started Jan 29 11:34:50.183594 systemd-journald[179]: Runtime Journal (/run/log/journal/ec22e5dbad5f4472fafde3ec12a5a43b) is 4.8M, max 38.6M, 33.7M free. Jan 29 11:34:50.187328 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:34:50.197470 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:34:50.190183 systemd-modules-load[180]: Inserted module 'overlay' Jan 29 11:34:50.209837 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:34:50.214906 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 29 11:34:50.355230 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:34:50.355277 kernel: Bridge firewalling registered Jan 29 11:34:50.245762 systemd-modules-load[180]: Inserted module 'br_netfilter' Jan 29 11:34:50.376892 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:34:50.384866 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:34:50.418857 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:34:50.424666 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:34:50.427142 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:34:50.428159 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:34:50.438257 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:34:50.467821 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:34:50.469405 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:34:50.480752 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:34:50.483598 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:34:50.490721 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:34:50.513520 dracut-cmdline[214]: dracut-dracut-053 Jan 29 11:34:50.518353 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:34:50.543384 systemd-resolved[212]: Positive Trust Anchors: Jan 29 11:34:50.543408 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:34:50.543473 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:34:50.557018 systemd-resolved[212]: Defaulting to hostname 'linux'. Jan 29 11:34:50.559403 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:34:50.560712 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:34:50.615474 kernel: SCSI subsystem initialized Jan 29 11:34:50.626479 kernel: Loading iSCSI transport class v2.0-870. 
Jan 29 11:34:50.638471 kernel: iscsi: registered transport (tcp) Jan 29 11:34:50.659471 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:34:50.659667 kernel: QLogic iSCSI HBA Driver Jan 29 11:34:50.706324 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:34:50.711624 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:34:50.740765 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:34:50.740845 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:34:50.740866 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:34:50.783478 kernel: raid6: avx512x4 gen() 16806 MB/s Jan 29 11:34:50.800469 kernel: raid6: avx512x2 gen() 15537 MB/s Jan 29 11:34:50.817513 kernel: raid6: avx512x1 gen() 8225 MB/s Jan 29 11:34:50.834462 kernel: raid6: avx2x4 gen() 16565 MB/s Jan 29 11:34:50.851473 kernel: raid6: avx2x2 gen() 15020 MB/s Jan 29 11:34:50.868468 kernel: raid6: avx2x1 gen() 12474 MB/s Jan 29 11:34:50.868561 kernel: raid6: using algorithm avx512x4 gen() 16806 MB/s Jan 29 11:34:50.885468 kernel: raid6: .... xor() 6478 MB/s, rmw enabled Jan 29 11:34:50.885825 kernel: raid6: using avx512x2 recovery algorithm Jan 29 11:34:50.913527 kernel: xor: automatically using best checksumming function avx Jan 29 11:34:51.113470 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:34:51.125672 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:34:51.130667 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:34:51.176423 systemd-udevd[398]: Using default interface naming scheme 'v255'. Jan 29 11:34:51.188113 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:34:51.201607 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:34:51.230173 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Jan 29 11:34:51.296922 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:34:51.306647 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:34:51.372938 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:34:51.381745 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:34:51.411336 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:34:51.415671 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:34:51.417627 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:34:51.422495 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:34:51.434331 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:34:51.481281 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 29 11:34:51.501965 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 29 11:34:51.504185 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jan 29 11:34:51.504816 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:81:d5:52:1b:69 Jan 29 11:34:51.484219 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 29 11:34:51.509465 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:34:51.516920 (udev-worker)[447]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:34:51.528276 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:34:51.528451 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:34:51.534506 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:34:51.535844 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:34:51.536072 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:34:51.541813 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:34:51.556591 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:34:51.561538 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:34:51.561612 kernel: AES CTR mode by8 optimization enabled Jan 29 11:34:51.619878 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 29 11:34:51.620167 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 29 11:34:51.635492 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 29 11:34:51.643225 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:34:51.643291 kernel: GPT:9289727 != 16777215 Jan 29 11:34:51.643311 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:34:51.643329 kernel: GPT:9289727 != 16777215 Jan 29 11:34:51.643346 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:34:51.643363 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 11:34:51.713550 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:34:51.724866 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:34:51.754354 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (461) Jan 29 11:34:51.786989 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (459) Jan 29 11:34:51.795826 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:34:51.866911 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 29 11:34:51.885219 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 11:34:51.897322 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 29 11:34:51.911671 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 29 11:34:51.911962 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 29 11:34:51.921652 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:34:51.937977 disk-uuid[632]: Primary Header is updated. Jan 29 11:34:51.937977 disk-uuid[632]: Secondary Entries is updated. Jan 29 11:34:51.937977 disk-uuid[632]: Secondary Header is updated. Jan 29 11:34:51.945465 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 11:34:51.951461 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 11:34:52.956140 disk-uuid[633]: The operation has completed successfully. 
Jan 29 11:34:52.958029 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 11:34:53.110281 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:34:53.110528 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:34:53.133747 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:34:53.150219 sh[893]: Success Jan 29 11:34:53.165462 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 11:34:53.285941 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:34:53.294584 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:34:53.297322 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 11:34:53.345243 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 Jan 29 11:34:53.345307 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:34:53.345326 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:34:53.345345 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:34:53.346460 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:34:53.454474 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 11:34:53.456831 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:34:53.457673 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:34:53.463778 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:34:53.467652 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:34:53.491805 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:34:53.491873 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:34:53.491902 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 11:34:53.498122 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 11:34:53.511261 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:34:53.512574 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:34:53.517816 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:34:53.528729 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:34:53.643778 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:34:53.653698 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:34:53.691592 systemd-networkd[1086]: lo: Link UP Jan 29 11:34:53.691603 systemd-networkd[1086]: lo: Gained carrier Jan 29 11:34:53.695583 systemd-networkd[1086]: Enumeration completed Jan 29 11:34:53.696177 systemd-networkd[1086]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:34:53.696183 systemd-networkd[1086]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:34:53.696547 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:34:53.717332 systemd[1]: Reached target network.target - Network. 
Jan 29 11:34:53.720458 systemd-networkd[1086]: eth0: Link UP Jan 29 11:34:53.720464 systemd-networkd[1086]: eth0: Gained carrier Jan 29 11:34:53.720479 systemd-networkd[1086]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:34:53.745048 systemd-networkd[1086]: eth0: DHCPv4 address 172.31.25.200/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 11:34:53.861028 ignition[1018]: Ignition 2.20.0 Jan 29 11:34:53.861053 ignition[1018]: Stage: fetch-offline Jan 29 11:34:53.861322 ignition[1018]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:34:53.861336 ignition[1018]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 11:34:53.862565 ignition[1018]: Ignition finished successfully Jan 29 11:34:53.874963 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:34:53.884837 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 11:34:53.920098 ignition[1094]: Ignition 2.20.0 Jan 29 11:34:53.920112 ignition[1094]: Stage: fetch Jan 29 11:34:53.920561 ignition[1094]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:34:53.920575 ignition[1094]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 11:34:53.920683 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 11:34:53.945882 ignition[1094]: PUT result: OK Jan 29 11:34:53.949976 ignition[1094]: parsed url from cmdline: "" Jan 29 11:34:53.949987 ignition[1094]: no config URL provided Jan 29 11:34:53.949996 ignition[1094]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:34:53.950012 ignition[1094]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:34:53.950036 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 11:34:53.952484 ignition[1094]: PUT result: OK Jan 29 11:34:53.952559 ignition[1094]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 29 11:34:53.954294 ignition[1094]: GET result: OK Jan 29 11:34:53.954737 ignition[1094]: parsing config with SHA512: e9a23ce8911516b3cf6c497534cef73175bd7cb6b8de07a60b8af4dc389f276ba34c66c25a3cd67dd2ff1d6a76b0a1f0f530b2696b2e6dbd1f8f20eeed722505 Jan 29 11:34:53.966184 unknown[1094]: fetched base config from "system" Jan 29 11:34:53.966197 unknown[1094]: fetched base config from "system" Jan 29 11:34:53.966631 ignition[1094]: fetch: fetch complete Jan 29 11:34:53.966206 unknown[1094]: fetched user config from "aws" Jan 29 11:34:53.966638 ignition[1094]: fetch: fetch passed Jan 29 11:34:53.966693 ignition[1094]: Ignition finished successfully Jan 29 11:34:53.974764 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 11:34:53.981675 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:34:54.010266 ignition[1100]: Ignition 2.20.0 Jan 29 11:34:54.010346 ignition[1100]: Stage: kargs Jan 29 11:34:54.010988 ignition[1100]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:34:54.010998 ignition[1100]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 11:34:54.011118 ignition[1100]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 11:34:54.012804 ignition[1100]: PUT result: OK Jan 29 11:34:54.021163 ignition[1100]: kargs: kargs passed Jan 29 11:34:54.021244 ignition[1100]: Ignition finished successfully Jan 29 11:34:54.035196 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:34:54.047124 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 29 11:34:54.094921 ignition[1106]: Ignition 2.20.0 Jan 29 11:34:54.094935 ignition[1106]: Stage: disks Jan 29 11:34:54.095343 ignition[1106]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:34:54.095352 ignition[1106]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 11:34:54.095581 ignition[1106]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 11:34:54.096882 ignition[1106]: PUT result: OK Jan 29 11:34:54.102941 ignition[1106]: disks: disks passed Jan 29 11:34:54.102997 ignition[1106]: Ignition finished successfully Jan 29 11:34:54.104899 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:34:54.107510 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:34:54.108743 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:34:54.111071 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:34:54.112296 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:34:54.125695 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:34:54.134686 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:34:54.183923 systemd-fsck[1114]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 11:34:54.194965 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:34:54.209355 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:34:54.393529 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none. Jan 29 11:34:54.394281 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:34:54.397659 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:34:54.412662 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:34:54.422704 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:34:54.426877 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 11:34:54.426947 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:34:54.426983 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:34:54.445876 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:34:54.464666 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1133) Jan 29 11:34:54.465533 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:34:54.471497 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:34:54.471549 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:34:54.471569 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 11:34:54.488525 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 11:34:54.490760 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:34:54.779988 initrd-setup-root[1157]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:34:54.788513 initrd-setup-root[1164]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:34:54.796452 initrd-setup-root[1171]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:34:54.803361 initrd-setup-root[1178]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:34:55.128191 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:34:55.139576 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:34:55.143477 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:34:55.154637 systemd-networkd[1086]: eth0: Gained IPv6LL Jan 29 11:34:55.203472 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:34:55.203735 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:34:55.254256 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:34:55.262300 ignition[1247]: INFO : Ignition 2.20.0 Jan 29 11:34:55.262300 ignition[1247]: INFO : Stage: mount Jan 29 11:34:55.264533 ignition[1247]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:34:55.264533 ignition[1247]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 11:34:55.264533 ignition[1247]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 11:34:55.268733 ignition[1247]: INFO : PUT result: OK Jan 29 11:34:55.276929 ignition[1247]: INFO : mount: mount passed Jan 29 11:34:55.276929 ignition[1247]: INFO : Ignition finished successfully Jan 29 11:34:55.279139 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:34:55.289641 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:34:55.405530 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:34:55.427469 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1257) Jan 29 11:34:55.427553 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:34:55.429083 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:34:55.429107 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 11:34:55.440469 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 11:34:55.447156 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:34:55.482967 ignition[1274]: INFO : Ignition 2.20.0 Jan 29 11:34:55.482967 ignition[1274]: INFO : Stage: files Jan 29 11:34:55.482967 ignition[1274]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:34:55.482967 ignition[1274]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 11:34:55.482967 ignition[1274]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 11:34:55.489843 ignition[1274]: INFO : PUT result: OK Jan 29 11:34:55.492215 ignition[1274]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:34:55.494269 ignition[1274]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:34:55.494269 ignition[1274]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:34:55.525638 ignition[1274]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:34:55.527240 ignition[1274]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:34:55.528762 unknown[1274]: wrote ssh authorized keys file for user: core Jan 29 11:34:55.530314 ignition[1274]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:34:55.532837 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:34:55.534911 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:34:55.537104 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:34:55.539112 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:34:55.540919 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:34:55.544446 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:34:55.544446 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:34:55.544446 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 29 11:34:55.910955 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 29 11:34:56.273212 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:34:56.275965 ignition[1274]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:34:56.278509 ignition[1274]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:34:56.278509 ignition[1274]: INFO : files: files passed Jan 29 11:34:56.278509 ignition[1274]: INFO : Ignition finished successfully Jan 29 11:34:56.285794 systemd[1]: Finished ignition-files.service - Ignition (files). 
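The "files" stage above writes /home/core/install.sh and /etc/flatcar/update.conf into /sysroot, downloads the kubernetes sysext image from the sysext-bakery release, and links /etc/extensions/kubernetes.raw at /opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw. A rough Python sketch of the equivalent operations against a mounted /sysroot (Ignition performs these natively from its config; this is only an illustration of what the log records):

    import os
    import urllib.request

    SYSROOT = "/sysroot"
    RAW_URL = ("https://github.com/flatcar/sysext-bakery/releases/download/"
               "latest/kubernetes-v1.31.0-x86-64.raw")
    RAW_DEST = f"{SYSROOT}/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
    LINK = f"{SYSROOT}/etc/extensions/kubernetes.raw"

    def main() -> None:
        # Fetch the system extension image into /opt/extensions.
        os.makedirs(os.path.dirname(RAW_DEST), exist_ok=True)
        urllib.request.urlretrieve(RAW_URL, RAW_DEST)

        # Link it under /etc/extensions so systemd-sysext activates it on boot.
        os.makedirs(os.path.dirname(LINK), exist_ok=True)
        if not os.path.islink(LINK):
            os.symlink("/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw", LINK)

    if __name__ == "__main__":
        main()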
Jan 29 11:34:56.292883 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:34:56.303702 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:34:56.315245 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:34:56.315554 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:34:56.325520 initrd-setup-root-after-ignition[1302]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:34:56.327616 initrd-setup-root-after-ignition[1302]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:34:56.331997 initrd-setup-root-after-ignition[1306]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:34:56.336601 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:34:56.339725 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:34:56.346618 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:34:56.401907 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:34:56.402289 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:34:56.407794 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:34:56.409270 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:34:56.413991 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:34:56.424734 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:34:56.469361 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:34:56.478268 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:34:56.528578 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:34:56.528831 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:34:56.535262 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:34:56.538705 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:34:56.540920 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:34:56.544154 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:34:56.545876 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:34:56.547265 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:34:56.551545 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:34:56.553553 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:34:56.559487 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:34:56.563860 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:34:56.566005 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:34:56.567547 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:34:56.570422 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:34:56.573094 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:34:56.573265 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 29 11:34:56.576951 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:34:56.582346 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:34:56.582661 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:34:56.585493 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:34:56.589515 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:34:56.589682 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:34:56.601315 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:34:56.613937 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:34:56.621132 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:34:56.622481 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:34:56.641464 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:34:56.641634 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:34:56.646745 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:34:56.654912 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:34:56.661794 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:34:56.663387 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:34:56.668048 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:34:56.669398 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:34:56.682640 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:34:56.701968 ignition[1326]: INFO : Ignition 2.20.0 Jan 29 11:34:56.705971 ignition[1326]: INFO : Stage: umount Jan 29 11:34:56.705971 ignition[1326]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:34:56.705971 ignition[1326]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 11:34:56.711530 ignition[1326]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 11:34:56.708020 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:34:56.721470 ignition[1326]: INFO : PUT result: OK Jan 29 11:34:56.724801 ignition[1326]: INFO : umount: umount passed Jan 29 11:34:56.724801 ignition[1326]: INFO : Ignition finished successfully Jan 29 11:34:56.729293 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:34:56.730941 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:34:56.737240 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:34:56.742286 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:34:56.745688 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:34:56.751401 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:34:56.760803 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:34:56.763897 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 11:34:56.763964 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 11:34:56.766486 systemd[1]: Stopped target network.target - Network. Jan 29 11:34:56.769683 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jan 29 11:34:56.769786 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:34:56.773108 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:34:56.774642 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:34:56.779325 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:34:56.785096 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:34:56.792920 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:34:56.796852 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:34:56.796920 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:34:56.803717 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:34:56.803791 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:34:56.811544 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:34:56.811640 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:34:56.817079 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:34:56.817169 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:34:56.819845 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:34:56.824980 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:34:56.838334 systemd-networkd[1086]: eth0: DHCPv6 lease lost Jan 29 11:34:56.839981 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:34:56.840229 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:34:56.850922 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:34:56.851093 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:34:56.856349 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:34:56.856424 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:34:56.863690 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:34:56.865325 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:34:56.866513 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:34:56.868317 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:34:56.868371 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:34:56.870315 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:34:56.871408 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:34:56.872627 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:34:56.872673 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:34:56.881652 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:34:56.892628 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:34:56.894721 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:34:56.914996 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:34:56.915328 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:34:56.929921 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 29 11:34:56.930067 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:34:56.934779 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:34:56.934847 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:34:56.939226 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:34:56.939434 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:34:56.946246 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:34:56.946338 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:34:56.951798 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:34:56.951887 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:34:56.958686 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:34:56.960551 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:34:56.975735 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:34:56.977481 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:34:56.977735 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:34:56.984460 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:34:56.984634 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:34:56.995749 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:34:56.995856 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:34:57.007804 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:34:57.007904 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:34:57.012146 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:34:57.018778 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:34:57.042594 systemd[1]: Switching root. Jan 29 11:34:57.077853 systemd-journald[179]: Journal stopped Jan 29 11:34:59.248631 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). Jan 29 11:34:59.249017 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:34:59.249045 kernel: SELinux: policy capability open_perms=1 Jan 29 11:34:59.249065 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:34:59.249169 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:34:59.249256 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:34:59.249277 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:34:59.249305 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:34:59.249327 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:34:59.249349 kernel: audit: type=1403 audit(1738150497.465:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:34:59.249381 systemd[1]: Successfully loaded SELinux policy in 72.055ms. Jan 29 11:34:59.249496 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 33.120ms. 
Jan 29 11:34:59.249524 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:34:59.249687 systemd[1]: Detected virtualization amazon. Jan 29 11:34:59.249716 systemd[1]: Detected architecture x86-64. Jan 29 11:34:59.249739 systemd[1]: Detected first boot. Jan 29 11:34:59.249769 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:34:59.249793 zram_generator::config[1369]: No configuration found. Jan 29 11:34:59.249819 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:34:59.249844 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:34:59.249874 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:34:59.249977 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:34:59.250007 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:34:59.250031 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:34:59.250056 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:34:59.250080 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:34:59.250105 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:34:59.250139 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:34:59.250163 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:34:59.250191 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:34:59.250216 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:34:59.250240 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:34:59.250264 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:34:59.250405 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:34:59.250435 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:34:59.250471 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:34:59.250489 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 11:34:59.250514 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:34:59.250537 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:34:59.250556 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:34:59.250576 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:34:59.250595 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:34:59.250617 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:34:59.250639 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:34:59.250660 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:34:59.250680 systemd[1]: Reached target swap.target - Swaps. 
Jan 29 11:34:59.250702 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:34:59.250720 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:34:59.250739 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:34:59.250760 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:34:59.250780 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:34:59.250800 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:34:59.250822 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:34:59.250844 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:34:59.250864 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:34:59.250886 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:34:59.250903 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:34:59.251005 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:34:59.251028 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:34:59.251056 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:34:59.251075 systemd[1]: Reached target machines.target - Containers. Jan 29 11:34:59.251094 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:34:59.251114 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:34:59.251139 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:34:59.251159 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:34:59.251179 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:34:59.251200 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:34:59.251223 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:34:59.251243 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:34:59.251265 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:34:59.251286 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:34:59.251306 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:34:59.251331 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:34:59.251350 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:34:59.251370 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:34:59.251391 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:34:59.258520 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:34:59.258567 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:34:59.258594 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 29 11:34:59.258617 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:34:59.258640 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:34:59.258669 systemd[1]: Stopped verity-setup.service. Jan 29 11:34:59.258694 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:34:59.258717 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:34:59.258739 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:34:59.258761 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:34:59.258783 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:34:59.258805 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:34:59.258827 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:34:59.258853 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:34:59.258876 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:34:59.258899 kernel: fuse: init (API version 7.39) Jan 29 11:34:59.258923 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:34:59.259199 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:34:59.259364 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:34:59.259387 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:34:59.259408 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:34:59.259431 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:34:59.261489 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:34:59.261527 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:34:59.261551 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:34:59.261578 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:34:59.261598 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:34:59.261626 kernel: loop: module loaded Jan 29 11:34:59.261647 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:34:59.261667 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:34:59.261687 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:34:59.261708 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:34:59.261770 systemd-journald[1448]: Collecting audit messages is disabled. Jan 29 11:34:59.261808 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:34:59.261829 systemd-journald[1448]: Journal started Jan 29 11:34:59.261868 systemd-journald[1448]: Runtime Journal (/run/log/journal/ec22e5dbad5f4472fafde3ec12a5a43b) is 4.8M, max 38.6M, 33.7M free. Jan 29 11:34:59.266260 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:34:58.626392 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:34:58.677812 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. 
Jan 29 11:34:58.678208 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:34:59.274749 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:34:59.276747 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:34:59.305168 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:34:59.330695 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:34:59.330745 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:34:59.347474 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:34:59.360656 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:34:59.360731 kernel: ACPI: bus type drm_connector registered Jan 29 11:34:59.368487 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:34:59.371335 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:34:59.375468 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:34:59.375737 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:34:59.378092 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:34:59.378242 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:34:59.381120 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:34:59.382860 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:34:59.384622 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:34:59.428007 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:34:59.467207 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:34:59.476827 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:34:59.502632 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:34:59.507193 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:34:59.522012 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:34:59.534467 kernel: loop0: detected capacity change from 0 to 138184 Jan 29 11:34:59.535315 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:34:59.580998 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:34:59.597163 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:34:59.607622 systemd-journald[1448]: Time spent on flushing to /var/log/journal/ec22e5dbad5f4472fafde3ec12a5a43b is 80.730ms for 950 entries. Jan 29 11:34:59.607622 systemd-journald[1448]: System Journal (/var/log/journal/ec22e5dbad5f4472fafde3ec12a5a43b) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:34:59.721873 systemd-journald[1448]: Received client request to flush runtime journal. Jan 29 11:34:59.721947 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:34:59.627962 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jan 29 11:34:59.632160 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:34:59.699411 udevadm[1508]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 11:34:59.710026 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:34:59.722244 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:34:59.730027 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:34:59.738477 kernel: loop1: detected capacity change from 0 to 140992 Jan 29 11:34:59.806554 systemd-tmpfiles[1515]: ACLs are not supported, ignoring. Jan 29 11:34:59.807069 systemd-tmpfiles[1515]: ACLs are not supported, ignoring. Jan 29 11:34:59.823085 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:34:59.878497 kernel: loop2: detected capacity change from 0 to 205544 Jan 29 11:34:59.993029 kernel: loop3: detected capacity change from 0 to 62848 Jan 29 11:35:00.119472 kernel: loop4: detected capacity change from 0 to 138184 Jan 29 11:35:00.171826 kernel: loop5: detected capacity change from 0 to 140992 Jan 29 11:35:00.201494 kernel: loop6: detected capacity change from 0 to 205544 Jan 29 11:35:00.244474 kernel: loop7: detected capacity change from 0 to 62848 Jan 29 11:35:00.255574 (sd-merge)[1522]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 29 11:35:00.258234 (sd-merge)[1522]: Merged extensions into '/usr'. Jan 29 11:35:00.268927 systemd[1]: Reloading requested from client PID 1477 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:35:00.269499 systemd[1]: Reloading... Jan 29 11:35:00.457600 zram_generator::config[1548]: No configuration found. Jan 29 11:35:00.907539 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:35:01.233713 systemd[1]: Reloading finished in 963 ms. Jan 29 11:35:01.287483 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:35:01.314728 systemd[1]: Starting ensure-sysext.service... Jan 29 11:35:01.349659 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:35:01.412598 systemd[1]: Reloading requested from client PID 1596 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:35:01.412630 systemd[1]: Reloading... Jan 29 11:35:01.609715 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:35:01.618315 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:35:01.620391 systemd-tmpfiles[1597]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:35:01.621123 systemd-tmpfiles[1597]: ACLs are not supported, ignoring. Jan 29 11:35:01.621212 systemd-tmpfiles[1597]: ACLs are not supported, ignoring. Jan 29 11:35:01.646776 systemd-tmpfiles[1597]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:35:01.646924 systemd-tmpfiles[1597]: Skipping /boot Jan 29 11:35:01.801474 zram_generator::config[1621]: No configuration found. 
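The sd-merge messages above ("Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami' ... Merged extensions into '/usr'") are systemd-sysext overlaying extension images onto /usr and /opt. A small sketch of how those images are typically discovered; the search directories follow the systemd-sysext documentation and are an assumption here, not something read from this log:

    from pathlib import Path

    # Conventional sysext search locations (assumed, per systemd-sysext docs).
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_sysext_images():
        images = []
        for d in map(Path, SEARCH_DIRS):
            if d.is_dir():
                # Raw disk images (*.raw) and plain directory trees both count.
                images += sorted(p for p in d.iterdir()
                                 if p.suffix == ".raw" or p.is_dir())
        return images

    if __name__ == "__main__":
        for img in list_sysext_images():
            print(img)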
Jan 29 11:35:01.820258 systemd-tmpfiles[1597]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:35:01.820430 systemd-tmpfiles[1597]: Skipping /boot Jan 29 11:35:02.186122 ldconfig[1470]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:35:02.250782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:35:02.404718 systemd[1]: Reloading finished in 991 ms. Jan 29 11:35:02.424570 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:35:02.426421 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:35:02.443223 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:35:02.460839 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:35:02.464871 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:35:02.480599 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:35:02.497278 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:35:02.501052 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:35:02.507823 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:35:02.514063 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:35:02.514358 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:35:02.525058 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:35:02.534576 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:35:02.543841 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:35:02.545283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:35:02.545815 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:35:02.556858 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:35:02.560404 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:35:02.562735 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:35:02.568105 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:35:02.569524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:35:02.569967 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:35:02.570161 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:35:02.583888 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 29 11:35:02.584280 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:35:02.593113 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:35:02.604609 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:35:02.606602 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:35:02.606977 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:35:02.610297 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:35:02.611864 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:35:02.612208 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:35:02.619939 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:35:02.621623 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:35:02.628387 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:35:02.637529 systemd[1]: Finished ensure-sysext.service. Jan 29 11:35:02.653258 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:35:02.654543 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:35:02.660509 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:35:02.664249 systemd-udevd[1687]: Using default interface naming scheme 'v255'. Jan 29 11:35:02.668889 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:35:02.672272 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:35:02.684966 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:35:02.685254 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:35:02.687359 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:35:02.717844 augenrules[1717]: No rules Jan 29 11:35:02.721581 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:35:02.721926 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:35:02.724788 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:35:02.766111 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:35:02.771753 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:35:02.778660 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:35:02.796144 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:35:02.816942 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:35:02.960283 systemd-networkd[1734]: lo: Link UP Jan 29 11:35:02.960675 systemd-networkd[1734]: lo: Gained carrier Jan 29 11:35:02.961702 systemd-networkd[1734]: Enumeration completed Jan 29 11:35:02.961948 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 29 11:35:02.970680 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:35:03.006670 systemd-resolved[1681]: Positive Trust Anchors: Jan 29 11:35:03.006691 systemd-resolved[1681]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:35:03.006739 systemd-resolved[1681]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:35:03.012049 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:35:03.013068 (udev-worker)[1736]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:35:03.023251 systemd-resolved[1681]: Defaulting to hostname 'linux'. Jan 29 11:35:03.029628 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:35:03.030977 systemd[1]: Reached target network.target - Network. Jan 29 11:35:03.033079 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:35:03.090488 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 11:35:03.102126 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:35:03.102301 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jan 29 11:35:03.104772 kernel: ACPI: button: Sleep Button [SLPF] Jan 29 11:35:03.108653 systemd-networkd[1734]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:35:03.109113 systemd-networkd[1734]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:35:03.117112 systemd-networkd[1734]: eth0: Link UP Jan 29 11:35:03.117345 systemd-networkd[1734]: eth0: Gained carrier Jan 29 11:35:03.117374 systemd-networkd[1734]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:35:03.130829 systemd-networkd[1734]: eth0: DHCPv4 address 172.31.25.200/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 11:35:03.155556 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Jan 29 11:35:03.198487 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 29 11:35:03.263468 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1733) Jan 29 11:35:03.279730 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:35:03.349471 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:35:03.489430 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 11:35:03.615955 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:35:03.621987 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:35:03.633927 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
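The DHCPv4 lease above hands eth0 the address 172.31.25.200/20 with gateway 172.31.16.1. A quick check with Python's ipaddress module confirms the gateway sits inside the acquired /20 (172.31.16.0/20), i.e. it is reachable on-link:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.25.200/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)             # 172.31.16.0/20
    print(gateway in iface.network)  # True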
Jan 29 11:35:03.637773 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:35:03.685651 lvm[1848]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:35:03.692629 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:35:03.732722 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:35:03.734673 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:35:03.735941 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:35:03.737510 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:35:03.739252 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:35:03.745151 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:35:03.748957 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:35:03.753186 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:35:03.762567 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:35:03.762619 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:35:03.764799 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:35:03.768919 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:35:03.780330 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:35:03.799792 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:35:03.803144 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:35:03.806223 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:35:03.807984 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:35:03.809519 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:35:03.810822 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:35:03.810864 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:35:03.820146 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:35:03.826647 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 11:35:03.834725 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:35:03.840623 lvm[1855]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:35:03.841005 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:35:03.849233 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:35:03.851644 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:35:03.859238 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:35:03.870683 systemd[1]: Started ntpd.service - Network Time Service. Jan 29 11:35:03.889817 systemd[1]: Starting setup-oem.service - Setup OEM... 
Jan 29 11:35:03.916675 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:35:03.920828 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:35:03.942153 jq[1859]: false Jan 29 11:35:03.937141 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:35:03.939037 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:35:03.939761 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:35:03.943736 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:35:03.967847 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:35:03.976305 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:35:03.977642 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:35:04.036140 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:35:04.051486 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:35:04.059234 dbus-daemon[1858]: [system] SELinux support is enabled Jan 29 11:35:04.087895 dbus-daemon[1858]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1734 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 29 11:35:04.063107 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:35:04.065233 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:35:04.126650 ntpd[1862]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:04:56 UTC 2025 (1): Starting Jan 29 11:35:04.126686 ntpd[1862]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 11:35:04.127341 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:04:56 UTC 2025 (1): Starting Jan 29 11:35:04.127341 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 11:35:04.127341 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: ---------------------------------------------------- Jan 29 11:35:04.127341 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: ntp-4 is maintained by Network Time Foundation, Jan 29 11:35:04.127341 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 11:35:04.127341 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: corporation. Support and training for ntp-4 are Jan 29 11:35:04.127341 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: available at https://www.nwtime.org/support Jan 29 11:35:04.127341 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: ---------------------------------------------------- Jan 29 11:35:04.126697 ntpd[1862]: ---------------------------------------------------- Jan 29 11:35:04.127875 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:35:04.126705 ntpd[1862]: ntp-4 is maintained by Network Time Foundation, Jan 29 11:35:04.129316 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:35:04.126714 ntpd[1862]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 29 11:35:04.129927 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:35:04.126853 ntpd[1862]: corporation. Support and training for ntp-4 are Jan 29 11:35:04.146515 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:35:04.126864 ntpd[1862]: available at https://www.nwtime.org/support Jan 29 11:35:04.146551 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:35:04.126874 ntpd[1862]: ---------------------------------------------------- Jan 29 11:35:04.150167 dbus-daemon[1858]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 11:35:04.183832 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 29 11:35:04.195629 jq[1868]: true Jan 29 11:35:04.201953 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: proto: precision = 0.092 usec (-23) Jan 29 11:35:04.201953 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: basedate set to 2025-01-17 Jan 29 11:35:04.201953 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: gps base set to 2025-01-19 (week 2350) Jan 29 11:35:04.184470 ntpd[1862]: proto: precision = 0.092 usec (-23) Jan 29 11:35:04.184920 ntpd[1862]: basedate set to 2025-01-17 Jan 29 11:35:04.184937 ntpd[1862]: gps base set to 2025-01-19 (week 2350) Jan 29 11:35:04.203736 (ntainerd)[1891]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:35:04.217623 ntpd[1862]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 11:35:04.218244 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 11:35:04.218244 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 11:35:04.217777 ntpd[1862]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 11:35:04.220166 ntpd[1862]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 11:35:04.221786 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 11:35:04.221786 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: Listen normally on 3 eth0 172.31.25.200:123 Jan 29 11:35:04.220234 ntpd[1862]: Listen normally on 3 eth0 172.31.25.200:123 Jan 29 11:35:04.220427 ntpd[1862]: Listen normally on 4 lo [::1]:123 Jan 29 11:35:04.225464 extend-filesystems[1860]: Found loop4 Jan 29 11:35:04.225464 extend-filesystems[1860]: Found loop5 Jan 29 11:35:04.225464 extend-filesystems[1860]: Found loop6 Jan 29 11:35:04.225464 extend-filesystems[1860]: Found loop7 Jan 29 11:35:04.225464 extend-filesystems[1860]: Found nvme0n1 Jan 29 11:35:04.225464 extend-filesystems[1860]: Found nvme0n1p1 Jan 29 11:35:04.225464 extend-filesystems[1860]: Found nvme0n1p2 Jan 29 11:35:04.225464 extend-filesystems[1860]: Found nvme0n1p3 Jan 29 11:35:04.225464 extend-filesystems[1860]: Found usr Jan 29 11:35:04.225464 extend-filesystems[1860]: Found nvme0n1p4 Jan 29 11:35:04.225464 extend-filesystems[1860]: Found nvme0n1p6 Jan 29 11:35:04.225464 extend-filesystems[1860]: Found nvme0n1p7 Jan 29 11:35:04.225464 extend-filesystems[1860]: Found nvme0n1p9 Jan 29 11:35:04.255932 extend-filesystems[1860]: Checking size of /dev/nvme0n1p9 Jan 29 11:35:04.234856 systemd[1]: motdgen.service: Deactivated successfully. 
Jan 29 11:35:04.257346 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: Listen normally on 4 lo [::1]:123 Jan 29 11:35:04.257346 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: bind(21) AF_INET6 fe80::481:d5ff:fe52:1b69%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 11:35:04.257346 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: unable to create socket on eth0 (5) for fe80::481:d5ff:fe52:1b69%2#123 Jan 29 11:35:04.257346 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: failed to init interface for address fe80::481:d5ff:fe52:1b69%2 Jan 29 11:35:04.257346 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: Listening on routing socket on fd #21 for interface updates Jan 29 11:35:04.257346 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 11:35:04.257346 ntpd[1862]: 29 Jan 11:35:04 ntpd[1862]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 11:35:04.225617 ntpd[1862]: bind(21) AF_INET6 fe80::481:d5ff:fe52:1b69%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 11:35:04.235397 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:35:04.225666 ntpd[1862]: unable to create socket on eth0 (5) for fe80::481:d5ff:fe52:1b69%2#123 Jan 29 11:35:04.225683 ntpd[1862]: failed to init interface for address fe80::481:d5ff:fe52:1b69%2 Jan 29 11:35:04.225800 ntpd[1862]: Listening on routing socket on fd #21 for interface updates Jan 29 11:35:04.253113 ntpd[1862]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 11:35:04.253156 ntpd[1862]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 11:35:04.288499 update_engine[1867]: I20250129 11:35:04.286846 1867 main.cc:92] Flatcar Update Engine starting Jan 29 11:35:04.301967 jq[1896]: true Jan 29 11:35:04.302669 update_engine[1867]: I20250129 11:35:04.302604 1867 update_check_scheduler.cc:74] Next update check in 6m39s Jan 29 11:35:04.313131 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:35:04.329705 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:35:04.332067 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 29 11:35:04.429117 extend-filesystems[1860]: Resized partition /dev/nvme0n1p9 Jan 29 11:35:04.450567 extend-filesystems[1912]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:35:04.465464 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 29 11:35:04.497325 systemd-logind[1866]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:35:04.497361 systemd-logind[1866]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 29 11:35:04.497388 systemd-logind[1866]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:35:04.506918 systemd-logind[1866]: New seat seat0. Jan 29 11:35:04.508024 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:35:04.522962 dbus-daemon[1858]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 11:35:04.524796 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
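The extend-filesystems step above grows the root ext4 filesystem on nvme0n1p9 in place, from 553472 to 1489915 blocks. With the 4 KiB block size the log reports, that works out to roughly 2.1 GiB before and 5.7 GiB after, as this small worked example shows:

    OLD_BLOCKS, NEW_BLOCKS, BLOCK = 553472, 1489915, 4096

    def gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    print(f"before: {gib(OLD_BLOCKS):.2f} GiB")  # ~2.11 GiB
    print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")  # ~5.68 GiB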
Jan 29 11:35:04.530819 dbus-daemon[1858]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1894 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 11:35:04.545476 coreos-metadata[1857]: Jan 29 11:35:04.543 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 11:35:04.545476 coreos-metadata[1857]: Jan 29 11:35:04.544 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 29 11:35:04.545476 coreos-metadata[1857]: Jan 29 11:35:04.544 INFO Fetch successful Jan 29 11:35:04.545476 coreos-metadata[1857]: Jan 29 11:35:04.544 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 29 11:35:04.551109 coreos-metadata[1857]: Jan 29 11:35:04.545 INFO Fetch successful Jan 29 11:35:04.551109 coreos-metadata[1857]: Jan 29 11:35:04.545 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 29 11:35:04.551109 coreos-metadata[1857]: Jan 29 11:35:04.546 INFO Fetch successful Jan 29 11:35:04.551109 coreos-metadata[1857]: Jan 29 11:35:04.546 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 29 11:35:04.551109 coreos-metadata[1857]: Jan 29 11:35:04.546 INFO Fetch successful Jan 29 11:35:04.551109 coreos-metadata[1857]: Jan 29 11:35:04.546 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 29 11:35:04.551526 systemd[1]: Starting polkit.service - Authorization Manager... Jan 29 11:35:04.572293 coreos-metadata[1857]: Jan 29 11:35:04.568 INFO Fetch failed with 404: resource not found Jan 29 11:35:04.572293 coreos-metadata[1857]: Jan 29 11:35:04.568 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 29 11:35:04.572293 coreos-metadata[1857]: Jan 29 11:35:04.568 INFO Fetch successful Jan 29 11:35:04.572293 coreos-metadata[1857]: Jan 29 11:35:04.569 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 29 11:35:04.572642 coreos-metadata[1857]: Jan 29 11:35:04.572 INFO Fetch successful Jan 29 11:35:04.572642 coreos-metadata[1857]: Jan 29 11:35:04.572 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 29 11:35:04.577470 coreos-metadata[1857]: Jan 29 11:35:04.575 INFO Fetch successful Jan 29 11:35:04.577470 coreos-metadata[1857]: Jan 29 11:35:04.575 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 29 11:35:04.577470 coreos-metadata[1857]: Jan 29 11:35:04.576 INFO Fetch successful Jan 29 11:35:04.577470 coreos-metadata[1857]: Jan 29 11:35:04.576 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 29 11:35:04.579466 coreos-metadata[1857]: Jan 29 11:35:04.578 INFO Fetch successful Jan 29 11:35:04.601465 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 29 11:35:04.618853 polkitd[1932]: Started polkitd version 121 Jan 29 11:35:04.634208 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1736) Jan 29 11:35:04.634292 bash[1933]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:35:04.627263 systemd-networkd[1734]: eth0: Gained IPv6LL Jan 29 11:35:04.634817 extend-filesystems[1912]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 29 11:35:04.634817 extend-filesystems[1912]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 
11:35:04.634817 extend-filesystems[1912]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 29 11:35:04.633870 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:35:04.691704 extend-filesystems[1860]: Resized filesystem in /dev/nvme0n1p9 Jan 29 11:35:04.637232 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:35:04.637877 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:35:04.673509 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:35:04.714510 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:35:04.736207 polkitd[1932]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 11:35:04.729363 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 29 11:35:04.736296 polkitd[1932]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 11:35:04.742090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:35:04.747671 polkitd[1932]: Finished loading, compiling and executing 2 rules Jan 29 11:35:04.752795 dbus-daemon[1858]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 11:35:04.755857 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:35:04.765812 polkitd[1932]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 11:35:04.766778 systemd[1]: Starting sshkeys.service... Jan 29 11:35:04.770418 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 11:35:04.845112 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 11:35:04.851198 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:35:04.958514 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:35:04.982851 systemd-hostnamed[1894]: Hostname set to (transient) Jan 29 11:35:04.995578 systemd-resolved[1681]: System hostname changed to 'ip-172-31-25-200'. Jan 29 11:35:05.000030 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 11:35:05.017855 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 11:35:05.094094 amazon-ssm-agent[1956]: Initializing new seelog logger Jan 29 11:35:05.094094 amazon-ssm-agent[1956]: New Seelog Logger Creation Complete Jan 29 11:35:05.094094 amazon-ssm-agent[1956]: 2025/01/29 11:35:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:35:05.094094 amazon-ssm-agent[1956]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:35:05.094094 amazon-ssm-agent[1956]: 2025/01/29 11:35:05 processing appconfig overrides Jan 29 11:35:05.103730 amazon-ssm-agent[1956]: 2025/01/29 11:35:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:35:05.103730 amazon-ssm-agent[1956]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:35:05.103730 amazon-ssm-agent[1956]: 2025/01/29 11:35:05 processing appconfig overrides Jan 29 11:35:05.111166 amazon-ssm-agent[1956]: 2025/01/29 11:35:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:35:05.111166 amazon-ssm-agent[1956]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
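The coreos-metadata entries above follow the IMDSv2 flow: a PUT to http://169.254.169.254/latest/api/token first, then each meta-data path is fetched with the returned token. The 404 on the ipv6 path is what IMDS returns for an item that does not exist, here presumably because the instance has no IPv6 address. A minimal sketch of that flow with urllib; the paths are the ones that appear in the log:

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl: int = 21600) -> str:
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path: str, token: str) -> str:
        req = urllib.request.Request(
            f"{IMDS}/2021-01-03/meta-data/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        token = imds_token()
        print(imds_get("instance-id", token))
        print(imds_get("placement/availability-zone", token))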
Jan 29 11:35:05.111166 amazon-ssm-agent[1956]: 2025/01/29 11:35:05 processing appconfig overrides Jan 29 11:35:05.111166 amazon-ssm-agent[1956]: 2025-01-29 11:35:05 INFO Proxy environment variables: Jan 29 11:35:05.121077 amazon-ssm-agent[1956]: 2025/01/29 11:35:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:35:05.121077 amazon-ssm-agent[1956]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:35:05.121077 amazon-ssm-agent[1956]: 2025/01/29 11:35:05 processing appconfig overrides Jan 29 11:35:05.213715 amazon-ssm-agent[1956]: 2025-01-29 11:35:05 INFO https_proxy: Jan 29 11:35:05.315229 amazon-ssm-agent[1956]: 2025-01-29 11:35:05 INFO http_proxy: Jan 29 11:35:05.354523 locksmithd[1907]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:35:05.418472 amazon-ssm-agent[1956]: 2025-01-29 11:35:05 INFO no_proxy: Jan 29 11:35:05.426678 coreos-metadata[1995]: Jan 29 11:35:05.426 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 11:35:05.435121 coreos-metadata[1995]: Jan 29 11:35:05.434 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 29 11:35:05.441697 coreos-metadata[1995]: Jan 29 11:35:05.436 INFO Fetch successful Jan 29 11:35:05.441697 coreos-metadata[1995]: Jan 29 11:35:05.437 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 11:35:05.444213 coreos-metadata[1995]: Jan 29 11:35:05.442 INFO Fetch successful Jan 29 11:35:05.465843 unknown[1995]: wrote ssh authorized keys file for user: core Jan 29 11:35:05.520679 amazon-ssm-agent[1956]: 2025-01-29 11:35:05 INFO Checking if agent identity type OnPrem can be assumed Jan 29 11:35:05.555318 update-ssh-keys[2060]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:35:05.571114 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 11:35:05.600479 systemd[1]: Finished sshkeys.service. Jan 29 11:35:05.611100 containerd[1891]: time="2025-01-29T11:35:05.610982274Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:35:05.639939 amazon-ssm-agent[1956]: 2025-01-29 11:35:05 INFO Checking if agent identity type EC2 can be assumed Jan 29 11:35:05.741837 amazon-ssm-agent[1956]: 2025-01-29 11:35:05 INFO Agent will take identity from EC2 Jan 29 11:35:05.807860 containerd[1891]: time="2025-01-29T11:35:05.807795797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:35:05.816531 containerd[1891]: time="2025-01-29T11:35:05.814631076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:35:05.816531 containerd[1891]: time="2025-01-29T11:35:05.814687391Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:35:05.816531 containerd[1891]: time="2025-01-29T11:35:05.814716019Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:35:05.816531 containerd[1891]: time="2025-01-29T11:35:05.814928851Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 29 11:35:05.816531 containerd[1891]: time="2025-01-29T11:35:05.814951566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:35:05.816531 containerd[1891]: time="2025-01-29T11:35:05.815021592Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:35:05.816531 containerd[1891]: time="2025-01-29T11:35:05.815039460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:35:05.816531 containerd[1891]: time="2025-01-29T11:35:05.815247865Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:35:05.816531 containerd[1891]: time="2025-01-29T11:35:05.815265266Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:35:05.816531 containerd[1891]: time="2025-01-29T11:35:05.815283865Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:35:05.816531 containerd[1891]: time="2025-01-29T11:35:05.815297636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:35:05.817083 containerd[1891]: time="2025-01-29T11:35:05.815378867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:35:05.817083 containerd[1891]: time="2025-01-29T11:35:05.815868812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:35:05.817083 containerd[1891]: time="2025-01-29T11:35:05.816027072Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:35:05.817083 containerd[1891]: time="2025-01-29T11:35:05.816046573Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:35:05.817083 containerd[1891]: time="2025-01-29T11:35:05.816135116Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:35:05.817083 containerd[1891]: time="2025-01-29T11:35:05.816191531Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:35:05.842617 containerd[1891]: time="2025-01-29T11:35:05.838293958Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:35:05.842617 containerd[1891]: time="2025-01-29T11:35:05.838379317Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:35:05.842617 containerd[1891]: time="2025-01-29T11:35:05.838404904Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:35:05.842617 containerd[1891]: time="2025-01-29T11:35:05.838430444Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 29 11:35:05.842617 containerd[1891]: time="2025-01-29T11:35:05.839063326Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:35:05.842617 containerd[1891]: time="2025-01-29T11:35:05.839320115Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:35:05.842617 containerd[1891]: time="2025-01-29T11:35:05.839704181Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:35:05.842617 containerd[1891]: time="2025-01-29T11:35:05.839867361Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:35:05.842617 containerd[1891]: time="2025-01-29T11:35:05.840175854Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:35:05.842617 containerd[1891]: time="2025-01-29T11:35:05.840207222Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:35:05.842617 containerd[1891]: time="2025-01-29T11:35:05.840229413Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:35:05.842617 containerd[1891]: time="2025-01-29T11:35:05.840250248Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:35:05.842617 containerd[1891]: time="2025-01-29T11:35:05.840269063Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:35:05.842617 containerd[1891]: time="2025-01-29T11:35:05.840290851Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:35:05.844335 amazon-ssm-agent[1956]: 2025-01-29 11:35:05 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 11:35:05.845030 containerd[1891]: time="2025-01-29T11:35:05.840312275Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:35:05.845030 containerd[1891]: time="2025-01-29T11:35:05.840331144Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:35:05.845030 containerd[1891]: time="2025-01-29T11:35:05.840350044Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:35:05.845030 containerd[1891]: time="2025-01-29T11:35:05.840369492Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:35:05.845030 containerd[1891]: time="2025-01-29T11:35:05.840397733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:35:05.845030 containerd[1891]: time="2025-01-29T11:35:05.840505525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:35:05.845030 containerd[1891]: time="2025-01-29T11:35:05.840528178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:35:05.845030 containerd[1891]: time="2025-01-29T11:35:05.840547492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 29 11:35:05.845030 containerd[1891]: time="2025-01-29T11:35:05.840566651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:35:05.845030 containerd[1891]: time="2025-01-29T11:35:05.840586252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:35:05.845030 containerd[1891]: time="2025-01-29T11:35:05.840603998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:35:05.845030 containerd[1891]: time="2025-01-29T11:35:05.840631011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:35:05.845030 containerd[1891]: time="2025-01-29T11:35:05.840650479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:35:05.845030 containerd[1891]: time="2025-01-29T11:35:05.840682345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:35:05.846088 containerd[1891]: time="2025-01-29T11:35:05.840699636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:35:05.846088 containerd[1891]: time="2025-01-29T11:35:05.840722107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:35:05.846088 containerd[1891]: time="2025-01-29T11:35:05.840741120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:35:05.846088 containerd[1891]: time="2025-01-29T11:35:05.840762132Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:35:05.846088 containerd[1891]: time="2025-01-29T11:35:05.840793582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:35:05.846088 containerd[1891]: time="2025-01-29T11:35:05.840813654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:35:05.846088 containerd[1891]: time="2025-01-29T11:35:05.840830496Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:35:05.846088 containerd[1891]: time="2025-01-29T11:35:05.840887199Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:35:05.846088 containerd[1891]: time="2025-01-29T11:35:05.840913210Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:35:05.846088 containerd[1891]: time="2025-01-29T11:35:05.840929355Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:35:05.846088 containerd[1891]: time="2025-01-29T11:35:05.840948097Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:35:05.846088 containerd[1891]: time="2025-01-29T11:35:05.840962803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:35:05.846088 containerd[1891]: time="2025-01-29T11:35:05.840981213Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 29 11:35:05.846088 containerd[1891]: time="2025-01-29T11:35:05.840996089Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:35:05.847527 containerd[1891]: time="2025-01-29T11:35:05.841012358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:35:05.847573 containerd[1891]: time="2025-01-29T11:35:05.841753351Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:35:05.847573 containerd[1891]: time="2025-01-29T11:35:05.842182210Z" level=info msg="Connect containerd service" Jan 29 11:35:05.847573 containerd[1891]: time="2025-01-29T11:35:05.842575727Z" level=info msg="using legacy CRI server" Jan 29 11:35:05.847573 containerd[1891]: time="2025-01-29T11:35:05.842594213Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:35:05.847573 containerd[1891]: time="2025-01-29T11:35:05.843057281Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:35:05.847573 
containerd[1891]: time="2025-01-29T11:35:05.845256233Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:35:05.847573 containerd[1891]: time="2025-01-29T11:35:05.846699841Z" level=info msg="Start subscribing containerd event" Jan 29 11:35:05.847573 containerd[1891]: time="2025-01-29T11:35:05.846877837Z" level=info msg="Start recovering state" Jan 29 11:35:05.847573 containerd[1891]: time="2025-01-29T11:35:05.846968881Z" level=info msg="Start event monitor" Jan 29 11:35:05.847573 containerd[1891]: time="2025-01-29T11:35:05.846999542Z" level=info msg="Start snapshots syncer" Jan 29 11:35:05.847573 containerd[1891]: time="2025-01-29T11:35:05.847012913Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:35:05.847573 containerd[1891]: time="2025-01-29T11:35:05.847023210Z" level=info msg="Start streaming server" Jan 29 11:35:05.851763 containerd[1891]: time="2025-01-29T11:35:05.851091654Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:35:05.851763 containerd[1891]: time="2025-01-29T11:35:05.851223126Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:35:05.851750 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:35:05.851969 containerd[1891]: time="2025-01-29T11:35:05.851923264Z" level=info msg="containerd successfully booted in 0.245065s" Jan 29 11:35:05.940201 amazon-ssm-agent[1956]: 2025-01-29 11:35:05 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 11:35:06.039328 amazon-ssm-agent[1956]: 2025-01-29 11:35:05 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 11:35:06.128543 amazon-ssm-agent[1956]: 2025-01-29 11:35:05 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 29 11:35:06.128543 amazon-ssm-agent[1956]: 2025-01-29 11:35:05 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 29 11:35:06.128543 amazon-ssm-agent[1956]: 2025-01-29 11:35:05 INFO [amazon-ssm-agent] Starting Core Agent Jan 29 11:35:06.128543 amazon-ssm-agent[1956]: 2025-01-29 11:35:05 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 29 11:35:06.128543 amazon-ssm-agent[1956]: 2025-01-29 11:35:05 INFO [Registrar] Starting registrar module Jan 29 11:35:06.128543 amazon-ssm-agent[1956]: 2025-01-29 11:35:05 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 29 11:35:06.128543 amazon-ssm-agent[1956]: 2025-01-29 11:35:06 INFO [EC2Identity] EC2 registration was successful. Jan 29 11:35:06.128543 amazon-ssm-agent[1956]: 2025-01-29 11:35:06 INFO [CredentialRefresher] credentialRefresher has started Jan 29 11:35:06.128543 amazon-ssm-agent[1956]: 2025-01-29 11:35:06 INFO [CredentialRefresher] Starting credentials refresher loop Jan 29 11:35:06.128543 amazon-ssm-agent[1956]: 2025-01-29 11:35:06 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 29 11:35:06.140658 amazon-ssm-agent[1956]: 2025-01-29 11:35:06 INFO [CredentialRefresher] Next credential rotation will be in 31.1499922708 minutes Jan 29 11:35:06.323823 sshd_keygen[1897]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:35:06.380801 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:35:06.402719 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 29 11:35:06.422452 systemd[1]: Started sshd@0-172.31.25.200:22-139.178.68.195:52236.service - OpenSSH per-connection server daemon (139.178.68.195:52236). Jan 29 11:35:06.445283 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:35:06.445682 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:35:06.459036 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:35:06.486411 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:35:06.498600 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:35:06.510157 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:35:06.513968 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:35:06.687290 sshd[2090]: Accepted publickey for core from 139.178.68.195 port 52236 ssh2: RSA SHA256:ucroplBE8DfUKARYUXLTx/d82p9pINzh/nPi5nZglro Jan 29 11:35:06.689006 sshd-session[2090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:06.710301 systemd-logind[1866]: New session 1 of user core. Jan 29 11:35:06.713293 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:35:06.722742 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:35:06.744898 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:35:06.765633 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:35:06.795520 (systemd)[2101]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:35:07.014474 systemd[2101]: Queued start job for default target default.target. Jan 29 11:35:07.025878 systemd[2101]: Created slice app.slice - User Application Slice. Jan 29 11:35:07.025920 systemd[2101]: Reached target paths.target - Paths. Jan 29 11:35:07.025943 systemd[2101]: Reached target timers.target - Timers. Jan 29 11:35:07.027414 systemd[2101]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:35:07.050941 systemd[2101]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:35:07.051090 systemd[2101]: Reached target sockets.target - Sockets. Jan 29 11:35:07.051110 systemd[2101]: Reached target basic.target - Basic System. Jan 29 11:35:07.051163 systemd[2101]: Reached target default.target - Main User Target. Jan 29 11:35:07.051341 systemd[2101]: Startup finished in 244ms. Jan 29 11:35:07.051750 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:35:07.059019 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:35:07.127296 ntpd[1862]: Listen normally on 6 eth0 [fe80::481:d5ff:fe52:1b69%2]:123 Jan 29 11:35:07.127827 ntpd[1862]: 29 Jan 11:35:07 ntpd[1862]: Listen normally on 6 eth0 [fe80::481:d5ff:fe52:1b69%2]:123 Jan 29 11:35:07.145086 amazon-ssm-agent[1956]: 2025-01-29 11:35:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 29 11:35:07.230213 systemd[1]: Started sshd@1-172.31.25.200:22-139.178.68.195:42402.service - OpenSSH per-connection server daemon (139.178.68.195:42402). 
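The "Accepted publickey for core ... RSA SHA256:ucroplBE8Df..." lines identify the key by its OpenSSH-style fingerprint: SHA-256 over the raw public-key blob, base64-encoded with the trailing padding stripped. A small sketch that computes the same fingerprint from an authorized_keys-style line; the key material itself is not in this log, so the input below is a placeholder:

    import base64, hashlib

    def ssh_fingerprint(pubkey_line: str) -> str:
        # authorized_keys format: "<type> <base64-blob> [comment]"
        blob_b64 = pubkey_line.split()[1]
        digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
        # OpenSSH prints base64 without '=' padding, prefixed with the hash name.
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Placeholder key line, not the key from this log:
    # print(ssh_fingerprint("ssh-rsa AAAAB3NzaC1yc2E... core@example"))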
Jan 29 11:35:07.245369 amazon-ssm-agent[1956]: 2025-01-29 11:35:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2111) started Jan 29 11:35:07.353889 amazon-ssm-agent[1956]: 2025-01-29 11:35:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 29 11:35:07.465922 sshd[2119]: Accepted publickey for core from 139.178.68.195 port 42402 ssh2: RSA SHA256:ucroplBE8DfUKARYUXLTx/d82p9pINzh/nPi5nZglro Jan 29 11:35:07.469210 sshd-session[2119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:07.480543 systemd-logind[1866]: New session 2 of user core. Jan 29 11:35:07.490890 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:35:07.643945 sshd[2127]: Connection closed by 139.178.68.195 port 42402 Jan 29 11:35:07.645237 sshd-session[2119]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:07.654088 systemd-logind[1866]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:35:07.655464 systemd[1]: sshd@1-172.31.25.200:22-139.178.68.195:42402.service: Deactivated successfully. Jan 29 11:35:07.658958 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:35:07.660978 systemd-logind[1866]: Removed session 2. Jan 29 11:35:07.691968 systemd[1]: Started sshd@2-172.31.25.200:22-139.178.68.195:42406.service - OpenSSH per-connection server daemon (139.178.68.195:42406). Jan 29 11:35:07.879428 sshd[2132]: Accepted publickey for core from 139.178.68.195 port 42406 ssh2: RSA SHA256:ucroplBE8DfUKARYUXLTx/d82p9pINzh/nPi5nZglro Jan 29 11:35:07.882604 sshd-session[2132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:07.912137 systemd-logind[1866]: New session 3 of user core. Jan 29 11:35:07.915732 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:35:07.921638 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:35:07.929074 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:35:07.930820 systemd[1]: Startup finished in 943ms (kernel) + 7.648s (initrd) + 10.535s (userspace) = 19.127s. Jan 29 11:35:07.940480 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:35:08.076789 sshd[2140]: Connection closed by 139.178.68.195 port 42406 Jan 29 11:35:08.080710 sshd-session[2132]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:08.094652 systemd-logind[1866]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:35:08.095420 systemd[1]: sshd@2-172.31.25.200:22-139.178.68.195:42406.service: Deactivated successfully. Jan 29 11:35:08.102890 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:35:08.117095 systemd-logind[1866]: Removed session 3. Jan 29 11:35:09.245236 kubelet[2138]: E0129 11:35:09.245179 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:35:09.248533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:35:09.249003 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
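The kubelet failure above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is the normal state of a node where the kubelet package is installed but the cluster join has not yet written its config: that file is typically generated by kubeadm, and the unit simply keeps restarting until it appears (the scheduled restart is visible further down). Purely to illustrate the shape of the file, a sketch that writes a minimal KubeletConfiguration; the field set is a hypothetical minimum, not what kubeadm actually generates:

    import pathlib, textwrap

    # Illustrative only: on a real node this file is produced by the join
    # tooling (e.g. kubeadm) and contains many more fields.
    minimal_config = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd
    """)

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(minimal_config)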
Jan 29 11:35:09.249433 systemd[1]: kubelet.service: Consumed 1.024s CPU time. Jan 29 11:35:11.999134 systemd-resolved[1681]: Clock change detected. Flushing caches. Jan 29 11:35:18.985539 systemd[1]: Started sshd@3-172.31.25.200:22-139.178.68.195:33438.service - OpenSSH per-connection server daemon (139.178.68.195:33438). Jan 29 11:35:19.201194 sshd[2155]: Accepted publickey for core from 139.178.68.195 port 33438 ssh2: RSA SHA256:ucroplBE8DfUKARYUXLTx/d82p9pINzh/nPi5nZglro Jan 29 11:35:19.203019 sshd-session[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:19.217499 systemd-logind[1866]: New session 4 of user core. Jan 29 11:35:19.227270 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:35:19.360001 sshd[2157]: Connection closed by 139.178.68.195 port 33438 Jan 29 11:35:19.360677 sshd-session[2155]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:19.368821 systemd[1]: sshd@3-172.31.25.200:22-139.178.68.195:33438.service: Deactivated successfully. Jan 29 11:35:19.374203 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:35:19.378071 systemd-logind[1866]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:35:19.381651 systemd-logind[1866]: Removed session 4. Jan 29 11:35:19.407936 systemd[1]: Started sshd@4-172.31.25.200:22-139.178.68.195:33440.service - OpenSSH per-connection server daemon (139.178.68.195:33440). Jan 29 11:35:19.617167 sshd[2162]: Accepted publickey for core from 139.178.68.195 port 33440 ssh2: RSA SHA256:ucroplBE8DfUKARYUXLTx/d82p9pINzh/nPi5nZglro Jan 29 11:35:19.619314 sshd-session[2162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:19.636869 systemd-logind[1866]: New session 5 of user core. Jan 29 11:35:19.644719 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:35:19.777891 sshd[2164]: Connection closed by 139.178.68.195 port 33440 Jan 29 11:35:19.778568 sshd-session[2162]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:19.787495 systemd-logind[1866]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:35:19.789415 systemd[1]: sshd@4-172.31.25.200:22-139.178.68.195:33440.service: Deactivated successfully. Jan 29 11:35:19.794267 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:35:19.799218 systemd-logind[1866]: Removed session 5. Jan 29 11:35:19.827877 systemd[1]: Started sshd@5-172.31.25.200:22-139.178.68.195:33454.service - OpenSSH per-connection server daemon (139.178.68.195:33454). Jan 29 11:35:20.027027 sshd[2169]: Accepted publickey for core from 139.178.68.195 port 33454 ssh2: RSA SHA256:ucroplBE8DfUKARYUXLTx/d82p9pINzh/nPi5nZglro Jan 29 11:35:20.027183 sshd-session[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:20.034235 systemd-logind[1866]: New session 6 of user core. Jan 29 11:35:20.043670 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:35:20.152391 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:35:20.159304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:35:20.175682 sshd[2171]: Connection closed by 139.178.68.195 port 33454 Jan 29 11:35:20.177529 sshd-session[2169]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:20.182011 systemd[1]: sshd@5-172.31.25.200:22-139.178.68.195:33454.service: Deactivated successfully. 
Jan 29 11:35:20.187998 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:35:20.189689 systemd-logind[1866]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:35:20.191803 systemd-logind[1866]: Removed session 6. Jan 29 11:35:20.216739 systemd[1]: Started sshd@6-172.31.25.200:22-139.178.68.195:33456.service - OpenSSH per-connection server daemon (139.178.68.195:33456). Jan 29 11:35:20.376307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:35:20.385788 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:35:20.409553 sshd[2179]: Accepted publickey for core from 139.178.68.195 port 33456 ssh2: RSA SHA256:ucroplBE8DfUKARYUXLTx/d82p9pINzh/nPi5nZglro Jan 29 11:35:20.412441 sshd-session[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:20.421768 systemd-logind[1866]: New session 7 of user core. Jan 29 11:35:20.428936 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:35:20.443182 kubelet[2186]: E0129 11:35:20.443132 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:35:20.447519 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:35:20.447814 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:35:20.552326 sudo[2194]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:35:20.552755 sudo[2194]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:35:20.593158 sudo[2194]: pam_unix(sudo:session): session closed for user root Jan 29 11:35:20.617227 sshd[2192]: Connection closed by 139.178.68.195 port 33456 Jan 29 11:35:20.618707 sshd-session[2179]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:20.628017 systemd[1]: sshd@6-172.31.25.200:22-139.178.68.195:33456.service: Deactivated successfully. Jan 29 11:35:20.631883 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:35:20.634272 systemd-logind[1866]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:35:20.636238 systemd-logind[1866]: Removed session 7. Jan 29 11:35:20.667631 systemd[1]: Started sshd@7-172.31.25.200:22-139.178.68.195:33468.service - OpenSSH per-connection server daemon (139.178.68.195:33468). Jan 29 11:35:20.855528 sshd[2199]: Accepted publickey for core from 139.178.68.195 port 33468 ssh2: RSA SHA256:ucroplBE8DfUKARYUXLTx/d82p9pINzh/nPi5nZglro Jan 29 11:35:20.859157 sshd-session[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:20.866495 systemd-logind[1866]: New session 8 of user core. Jan 29 11:35:20.872586 systemd[1]: Started session-8.scope - Session 8 of User core. 
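The sudo entry above runs "setenforce 1", switching SELinux into enforcing mode for the session's setup steps. A tiny check of the resulting mode, reading the kernel's selinuxfs directly (this is a convenience sketch, not part of the install flow in the log):

    def selinux_enforcing(path="/sys/fs/selinux/enforce") -> bool | None:
        # 1 = enforcing, 0 = permissive; the file is absent if SELinux is disabled.
        try:
            with open(path) as f:
                return f.read().strip() == "1"
        except FileNotFoundError:
            return None

    if __name__ == "__main__":
        print(selinux_enforcing())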
Jan 29 11:35:20.977587 sudo[2203]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:35:20.981039 sudo[2203]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:35:20.986821 sudo[2203]: pam_unix(sudo:session): session closed for user root Jan 29 11:35:20.997675 sudo[2202]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:35:21.002418 sudo[2202]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:35:21.018662 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:35:21.138170 augenrules[2225]: No rules Jan 29 11:35:21.140370 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:35:21.140630 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:35:21.144002 sudo[2202]: pam_unix(sudo:session): session closed for user root Jan 29 11:35:21.168035 sshd[2201]: Connection closed by 139.178.68.195 port 33468 Jan 29 11:35:21.171404 sshd-session[2199]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:21.177617 systemd[1]: sshd@7-172.31.25.200:22-139.178.68.195:33468.service: Deactivated successfully. Jan 29 11:35:21.179348 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:35:21.182622 systemd-logind[1866]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:35:21.183763 systemd-logind[1866]: Removed session 8. Jan 29 11:35:21.208840 systemd[1]: Started sshd@8-172.31.25.200:22-139.178.68.195:33474.service - OpenSSH per-connection server daemon (139.178.68.195:33474). Jan 29 11:35:21.410830 sshd[2233]: Accepted publickey for core from 139.178.68.195 port 33474 ssh2: RSA SHA256:ucroplBE8DfUKARYUXLTx/d82p9pINzh/nPi5nZglro Jan 29 11:35:21.410746 sshd-session[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:35:21.431361 systemd-logind[1866]: New session 9 of user core. Jan 29 11:35:21.435732 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:35:21.559117 sudo[2236]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:35:21.559576 sudo[2236]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:35:22.989701 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:35:22.998246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:35:23.104864 systemd[1]: Reloading requested from client PID 2269 ('systemctl') (unit session-9.scope)... Jan 29 11:35:23.104886 systemd[1]: Reloading... Jan 29 11:35:23.265412 zram_generator::config[2312]: No configuration found. Jan 29 11:35:23.410325 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:35:23.523022 systemd[1]: Reloading finished in 417 ms. Jan 29 11:35:23.603770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:35:23.607299 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:35:23.609958 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:35:23.610218 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:35:23.615782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
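The audit-rules sequence above hangs together: augenrules assembles the active rule set from the *.rules fragments under /etc/audit/rules.d, so after the sudo rm of 80-selinux.rules and 99-default.rules the merged set comes out empty, which is what the "augenrules[2225]: No rules" message reports. A rough approximation of that merge step, assuming lexical ordering of the fragments; this is a sketch, not the augenrules script itself:

    import pathlib

    def merge_audit_rules(rules_dir="/etc/audit/rules.d"):
        # Concatenate the *.rules fragments; with the directory emptied
        # (as in the log), the merged set is empty -> "No rules".
        fragments = sorted(pathlib.Path(rules_dir).glob("*.rules"))
        merged = "\n".join(p.read_text() for p in fragments)
        return merged if merged.strip() else None

    if __name__ == "__main__":
        print(merge_audit_rules() or "No rules")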
Jan 29 11:35:23.850845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:35:23.866446 (kubelet)[2371]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:35:23.923558 kubelet[2371]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:35:23.924007 kubelet[2371]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:35:23.924007 kubelet[2371]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:35:23.927416 kubelet[2371]: I0129 11:35:23.925398 2371 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:35:24.528209 kubelet[2371]: I0129 11:35:24.528161 2371 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:35:24.528209 kubelet[2371]: I0129 11:35:24.528194 2371 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:35:24.528728 kubelet[2371]: I0129 11:35:24.528701 2371 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:35:24.593422 kubelet[2371]: I0129 11:35:24.592793 2371 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:35:24.618943 kubelet[2371]: E0129 11:35:24.618906 2371 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:35:24.619120 kubelet[2371]: I0129 11:35:24.619099 2371 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:35:24.625893 kubelet[2371]: I0129 11:35:24.625853 2371 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:35:24.626105 kubelet[2371]: I0129 11:35:24.626004 2371 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:35:24.626201 kubelet[2371]: I0129 11:35:24.626162 2371 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:35:24.626406 kubelet[2371]: I0129 11:35:24.626201 2371 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.25.200","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:35:24.626580 kubelet[2371]: I0129 11:35:24.626413 2371 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:35:24.626580 kubelet[2371]: I0129 11:35:24.626429 2371 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:35:24.626580 kubelet[2371]: I0129 11:35:24.626547 2371 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:35:24.628427 kubelet[2371]: I0129 11:35:24.628405 2371 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:35:24.630010 kubelet[2371]: I0129 11:35:24.628747 2371 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:35:24.630010 kubelet[2371]: I0129 11:35:24.628795 2371 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:35:24.630010 kubelet[2371]: I0129 11:35:24.628814 2371 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:35:24.630238 kubelet[2371]: E0129 11:35:24.630208 2371 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:24.630287 kubelet[2371]: E0129 11:35:24.630261 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:24.636649 kubelet[2371]: I0129 11:35:24.636617 2371 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:35:24.639177 kubelet[2371]: I0129 11:35:24.639150 2371 
kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:35:24.639311 kubelet[2371]: W0129 11:35:24.639236 2371 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:35:24.640437 kubelet[2371]: I0129 11:35:24.640155 2371 server.go:1269] "Started kubelet" Jan 29 11:35:24.641908 kubelet[2371]: I0129 11:35:24.641881 2371 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:35:24.652603 kubelet[2371]: I0129 11:35:24.652469 2371 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:35:24.654318 kubelet[2371]: I0129 11:35:24.653948 2371 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:35:24.655242 kubelet[2371]: I0129 11:35:24.655182 2371 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:35:24.655679 kubelet[2371]: I0129 11:35:24.655649 2371 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:35:24.656068 kubelet[2371]: I0129 11:35:24.656042 2371 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:35:24.663032 kubelet[2371]: I0129 11:35:24.662086 2371 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:35:24.663032 kubelet[2371]: E0129 11:35:24.662547 2371 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.25.200\" not found" Jan 29 11:35:24.663433 kubelet[2371]: I0129 11:35:24.663199 2371 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:35:24.663433 kubelet[2371]: I0129 11:35:24.663256 2371 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:35:24.666991 kubelet[2371]: I0129 11:35:24.665642 2371 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:35:24.666991 kubelet[2371]: I0129 11:35:24.665820 2371 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:35:24.668279 kubelet[2371]: E0129 11:35:24.668241 2371 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.25.200\" not found" node="172.31.25.200" Jan 29 11:35:24.669172 kubelet[2371]: E0129 11:35:24.669151 2371 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:35:24.669387 kubelet[2371]: I0129 11:35:24.669215 2371 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:35:24.710483 kubelet[2371]: I0129 11:35:24.709122 2371 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:35:24.710483 kubelet[2371]: I0129 11:35:24.709146 2371 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:35:24.710483 kubelet[2371]: I0129 11:35:24.709169 2371 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:35:24.711950 kubelet[2371]: I0129 11:35:24.711931 2371 policy_none.go:49] "None policy: Start" Jan 29 11:35:24.713304 kubelet[2371]: I0129 11:35:24.712904 2371 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:35:24.713304 kubelet[2371]: I0129 11:35:24.712936 2371 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:35:24.727284 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:35:24.745403 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:35:24.752263 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:35:24.762350 kubelet[2371]: I0129 11:35:24.760775 2371 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:35:24.762350 kubelet[2371]: I0129 11:35:24.761147 2371 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:35:24.762350 kubelet[2371]: I0129 11:35:24.761162 2371 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:35:24.762350 kubelet[2371]: I0129 11:35:24.761946 2371 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:35:24.768248 kubelet[2371]: E0129 11:35:24.768215 2371 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.25.200\" not found" Jan 29 11:35:24.790637 kubelet[2371]: I0129 11:35:24.787936 2371 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:35:24.793012 kubelet[2371]: I0129 11:35:24.792983 2371 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:35:24.793129 kubelet[2371]: I0129 11:35:24.793021 2371 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:35:24.793129 kubelet[2371]: I0129 11:35:24.793043 2371 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:35:24.793249 kubelet[2371]: E0129 11:35:24.793161 2371 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 29 11:35:24.862444 kubelet[2371]: I0129 11:35:24.862407 2371 kubelet_node_status.go:72] "Attempting to register node" node="172.31.25.200" Jan 29 11:35:24.870143 kubelet[2371]: I0129 11:35:24.869366 2371 kubelet_node_status.go:75] "Successfully registered node" node="172.31.25.200" Jan 29 11:35:24.870143 kubelet[2371]: E0129 11:35:24.869424 2371 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.25.200\": node \"172.31.25.200\" not found" Jan 29 11:35:24.902631 kubelet[2371]: I0129 11:35:24.902560 2371 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 11:35:24.903894 containerd[1891]: time="2025-01-29T11:35:24.903451641Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:35:24.904333 kubelet[2371]: I0129 11:35:24.903730 2371 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 11:35:24.925338 sudo[2236]: pam_unix(sudo:session): session closed for user root Jan 29 11:35:24.948915 sshd[2235]: Connection closed by 139.178.68.195 port 33474 Jan 29 11:35:24.951451 sshd-session[2233]: pam_unix(sshd:session): session closed for user core Jan 29 11:35:24.968988 systemd[1]: sshd@8-172.31.25.200:22-139.178.68.195:33474.service: Deactivated successfully. Jan 29 11:35:24.972963 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:35:24.977596 systemd-logind[1866]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:35:24.981850 systemd-logind[1866]: Removed session 9. 
Jan 29 11:35:25.530824 kubelet[2371]: I0129 11:35:25.530774 2371 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 11:35:25.531292 kubelet[2371]: W0129 11:35:25.531000 2371 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 11:35:25.531292 kubelet[2371]: W0129 11:35:25.531285 2371 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 11:35:25.531510 kubelet[2371]: W0129 11:35:25.531329 2371 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 11:35:25.630770 kubelet[2371]: E0129 11:35:25.630463 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:25.630770 kubelet[2371]: I0129 11:35:25.630477 2371 apiserver.go:52] "Watching apiserver" Jan 29 11:35:25.658552 systemd[1]: Created slice kubepods-besteffort-pod2bde0190_16b0_49aa_8dae_03a4216f76fd.slice - libcontainer container kubepods-besteffort-pod2bde0190_16b0_49aa_8dae_03a4216f76fd.slice. Jan 29 11:35:25.663716 kubelet[2371]: I0129 11:35:25.663691 2371 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:35:25.667232 kubelet[2371]: I0129 11:35:25.667190 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-cilium-cgroup\") pod \"cilium-kxxrh\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " pod="kube-system/cilium-kxxrh" Jan 29 11:35:25.667361 kubelet[2371]: I0129 11:35:25.667229 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-etc-cni-netd\") pod \"cilium-kxxrh\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " pod="kube-system/cilium-kxxrh" Jan 29 11:35:25.667361 kubelet[2371]: I0129 11:35:25.667264 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-lib-modules\") pod \"cilium-kxxrh\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " pod="kube-system/cilium-kxxrh" Jan 29 11:35:25.667361 kubelet[2371]: I0129 11:35:25.667285 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-xtables-lock\") pod \"cilium-kxxrh\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " pod="kube-system/cilium-kxxrh" Jan 29 11:35:25.667361 kubelet[2371]: I0129 11:35:25.667305 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bde0190-16b0-49aa-8dae-03a4216f76fd-xtables-lock\") pod \"kube-proxy-r6d99\" (UID: 
\"2bde0190-16b0-49aa-8dae-03a4216f76fd\") " pod="kube-system/kube-proxy-r6d99" Jan 29 11:35:25.667361 kubelet[2371]: I0129 11:35:25.667327 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-cilium-run\") pod \"cilium-kxxrh\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " pod="kube-system/cilium-kxxrh" Jan 29 11:35:25.667361 kubelet[2371]: I0129 11:35:25.667347 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-bpf-maps\") pod \"cilium-kxxrh\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " pod="kube-system/cilium-kxxrh" Jan 29 11:35:25.667933 kubelet[2371]: I0129 11:35:25.667402 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-cni-path\") pod \"cilium-kxxrh\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " pod="kube-system/cilium-kxxrh" Jan 29 11:35:25.667933 kubelet[2371]: I0129 11:35:25.667429 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-host-proc-sys-net\") pod \"cilium-kxxrh\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " pod="kube-system/cilium-kxxrh" Jan 29 11:35:25.667933 kubelet[2371]: I0129 11:35:25.667455 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-host-proc-sys-kernel\") pod \"cilium-kxxrh\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " pod="kube-system/cilium-kxxrh" Jan 29 11:35:25.667933 kubelet[2371]: I0129 11:35:25.667477 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2bde0190-16b0-49aa-8dae-03a4216f76fd-kube-proxy\") pod \"kube-proxy-r6d99\" (UID: \"2bde0190-16b0-49aa-8dae-03a4216f76fd\") " pod="kube-system/kube-proxy-r6d99" Jan 29 11:35:25.667933 kubelet[2371]: I0129 11:35:25.667497 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2bde0190-16b0-49aa-8dae-03a4216f76fd-lib-modules\") pod \"kube-proxy-r6d99\" (UID: \"2bde0190-16b0-49aa-8dae-03a4216f76fd\") " pod="kube-system/kube-proxy-r6d99" Jan 29 11:35:25.671956 kubelet[2371]: I0129 11:35:25.667526 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc5l5\" (UniqueName: \"kubernetes.io/projected/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-kube-api-access-gc5l5\") pod \"cilium-kxxrh\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " pod="kube-system/cilium-kxxrh" Jan 29 11:35:25.671956 kubelet[2371]: I0129 11:35:25.667551 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-hostproc\") pod \"cilium-kxxrh\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " pod="kube-system/cilium-kxxrh" Jan 29 11:35:25.671956 kubelet[2371]: I0129 11:35:25.667853 2371 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-clustermesh-secrets\") pod \"cilium-kxxrh\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " pod="kube-system/cilium-kxxrh" Jan 29 11:35:25.671956 kubelet[2371]: I0129 11:35:25.667918 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-cilium-config-path\") pod \"cilium-kxxrh\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " pod="kube-system/cilium-kxxrh" Jan 29 11:35:25.671956 kubelet[2371]: I0129 11:35:25.667941 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-hubble-tls\") pod \"cilium-kxxrh\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " pod="kube-system/cilium-kxxrh" Jan 29 11:35:25.672220 kubelet[2371]: I0129 11:35:25.667970 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6gng\" (UniqueName: \"kubernetes.io/projected/2bde0190-16b0-49aa-8dae-03a4216f76fd-kube-api-access-n6gng\") pod \"kube-proxy-r6d99\" (UID: \"2bde0190-16b0-49aa-8dae-03a4216f76fd\") " pod="kube-system/kube-proxy-r6d99" Jan 29 11:35:25.688025 systemd[1]: Created slice kubepods-burstable-pod2bce1cde_c6ac_4931_a09b_22d2bcd4ad21.slice - libcontainer container kubepods-burstable-pod2bce1cde_c6ac_4931_a09b_22d2bcd4ad21.slice. Jan 29 11:35:25.972958 containerd[1891]: time="2025-01-29T11:35:25.972736494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r6d99,Uid:2bde0190-16b0-49aa-8dae-03a4216f76fd,Namespace:kube-system,Attempt:0,}" Jan 29 11:35:26.000638 containerd[1891]: time="2025-01-29T11:35:26.000549335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kxxrh,Uid:2bce1cde-c6ac-4931-a09b-22d2bcd4ad21,Namespace:kube-system,Attempt:0,}" Jan 29 11:35:26.612175 containerd[1891]: time="2025-01-29T11:35:26.612121705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:35:26.614047 containerd[1891]: time="2025-01-29T11:35:26.613999828Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:35:26.615032 containerd[1891]: time="2025-01-29T11:35:26.614981705Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:35:26.615916 containerd[1891]: time="2025-01-29T11:35:26.615883692Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:35:26.617431 containerd[1891]: time="2025-01-29T11:35:26.616706202Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:35:26.620850 containerd[1891]: time="2025-01-29T11:35:26.620779740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:35:26.623505 containerd[1891]: time="2025-01-29T11:35:26.622259491Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 648.431869ms" Jan 29 11:35:26.623505 containerd[1891]: time="2025-01-29T11:35:26.623238121Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 622.338907ms" Jan 29 11:35:26.631212 kubelet[2371]: E0129 11:35:26.631137 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:26.759051 containerd[1891]: time="2025-01-29T11:35:26.758591077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:35:26.760999 containerd[1891]: time="2025-01-29T11:35:26.759388803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:35:26.760999 containerd[1891]: time="2025-01-29T11:35:26.759422626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:26.760999 containerd[1891]: time="2025-01-29T11:35:26.759524143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:26.763066 containerd[1891]: time="2025-01-29T11:35:26.762752301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:35:26.763066 containerd[1891]: time="2025-01-29T11:35:26.762833287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:35:26.763066 containerd[1891]: time="2025-01-29T11:35:26.762852609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:26.765530 containerd[1891]: time="2025-01-29T11:35:26.765392565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:26.791216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount543669128.mount: Deactivated successfully. Jan 29 11:35:26.873925 systemd[1]: run-containerd-runc-k8s.io-05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d-runc.v7YAR4.mount: Deactivated successfully. Jan 29 11:35:26.883978 systemd[1]: Started cri-containerd-7430f7a034548d0b27a6d6b18e5952de8c05ce3a76c76da21294a7fe350d2d34.scope - libcontainer container 7430f7a034548d0b27a6d6b18e5952de8c05ce3a76c76da21294a7fe350d2d34. Jan 29 11:35:26.892741 systemd[1]: Started cri-containerd-05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d.scope - libcontainer container 05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d. 
Jan 29 11:35:26.942108 containerd[1891]: time="2025-01-29T11:35:26.941725544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kxxrh,Uid:2bce1cde-c6ac-4931-a09b-22d2bcd4ad21,Namespace:kube-system,Attempt:0,} returns sandbox id \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\"" Jan 29 11:35:26.945333 containerd[1891]: time="2025-01-29T11:35:26.945215489Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 11:35:26.945632 containerd[1891]: time="2025-01-29T11:35:26.945436742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r6d99,Uid:2bde0190-16b0-49aa-8dae-03a4216f76fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7430f7a034548d0b27a6d6b18e5952de8c05ce3a76c76da21294a7fe350d2d34\"" Jan 29 11:35:27.632130 kubelet[2371]: E0129 11:35:27.632070 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:28.633084 kubelet[2371]: E0129 11:35:28.633013 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:29.633792 kubelet[2371]: E0129 11:35:29.633731 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:30.633969 kubelet[2371]: E0129 11:35:30.633860 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:31.634753 kubelet[2371]: E0129 11:35:31.634667 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:32.634852 kubelet[2371]: E0129 11:35:32.634803 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:32.969157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount84729752.mount: Deactivated successfully. Jan 29 11:35:33.636924 kubelet[2371]: E0129 11:35:33.636655 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:34.637455 kubelet[2371]: E0129 11:35:34.637369 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:35.637940 kubelet[2371]: E0129 11:35:35.637822 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:35.891545 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 29 11:35:36.188307 containerd[1891]: time="2025-01-29T11:35:36.188176349Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:36.190388 containerd[1891]: time="2025-01-29T11:35:36.190122017Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 11:35:36.192501 containerd[1891]: time="2025-01-29T11:35:36.192460939Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:36.195691 containerd[1891]: time="2025-01-29T11:35:36.194580959Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.24929102s" Jan 29 11:35:36.195691 containerd[1891]: time="2025-01-29T11:35:36.194628809Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 11:35:36.197754 containerd[1891]: time="2025-01-29T11:35:36.197718103Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 11:35:36.199067 containerd[1891]: time="2025-01-29T11:35:36.199031632Z" level=info msg="CreateContainer within sandbox \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:35:36.227751 containerd[1891]: time="2025-01-29T11:35:36.227695331Z" level=info msg="CreateContainer within sandbox \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984\"" Jan 29 11:35:36.229116 containerd[1891]: time="2025-01-29T11:35:36.229080260Z" level=info msg="StartContainer for \"bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984\"" Jan 29 11:35:36.295797 systemd[1]: Started cri-containerd-bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984.scope - libcontainer container bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984. Jan 29 11:35:36.380891 containerd[1891]: time="2025-01-29T11:35:36.380103222Z" level=info msg="StartContainer for \"bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984\" returns successfully" Jan 29 11:35:36.394977 systemd[1]: cri-containerd-bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984.scope: Deactivated successfully. 
Jan 29 11:35:36.594999 containerd[1891]: time="2025-01-29T11:35:36.594700928Z" level=info msg="shim disconnected" id=bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984 namespace=k8s.io Jan 29 11:35:36.594999 containerd[1891]: time="2025-01-29T11:35:36.594769798Z" level=warning msg="cleaning up after shim disconnected" id=bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984 namespace=k8s.io Jan 29 11:35:36.594999 containerd[1891]: time="2025-01-29T11:35:36.594781878Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:35:36.639062 kubelet[2371]: E0129 11:35:36.639017 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:36.857314 containerd[1891]: time="2025-01-29T11:35:36.857067501Z" level=info msg="CreateContainer within sandbox \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:35:36.945732 containerd[1891]: time="2025-01-29T11:35:36.945673382Z" level=info msg="CreateContainer within sandbox \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2\"" Jan 29 11:35:36.954502 containerd[1891]: time="2025-01-29T11:35:36.954430903Z" level=info msg="StartContainer for \"110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2\"" Jan 29 11:35:37.066866 systemd[1]: Started cri-containerd-110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2.scope - libcontainer container 110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2. Jan 29 11:35:37.143431 containerd[1891]: time="2025-01-29T11:35:37.143296572Z" level=info msg="StartContainer for \"110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2\" returns successfully" Jan 29 11:35:37.175172 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:35:37.175656 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:35:37.175754 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:35:37.188208 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:35:37.188784 systemd[1]: cri-containerd-110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2.scope: Deactivated successfully. Jan 29 11:35:37.219673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984-rootfs.mount: Deactivated successfully. Jan 29 11:35:37.230699 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:35:37.256058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2-rootfs.mount: Deactivated successfully. 
Jan 29 11:35:37.277272 containerd[1891]: time="2025-01-29T11:35:37.277047243Z" level=info msg="shim disconnected" id=110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2 namespace=k8s.io Jan 29 11:35:37.277272 containerd[1891]: time="2025-01-29T11:35:37.277109553Z" level=warning msg="cleaning up after shim disconnected" id=110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2 namespace=k8s.io Jan 29 11:35:37.277272 containerd[1891]: time="2025-01-29T11:35:37.277122306Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:35:37.640229 kubelet[2371]: E0129 11:35:37.640048 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:37.817264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4256151673.mount: Deactivated successfully. Jan 29 11:35:37.869572 containerd[1891]: time="2025-01-29T11:35:37.869507945Z" level=info msg="CreateContainer within sandbox \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:35:37.900467 containerd[1891]: time="2025-01-29T11:35:37.898367949Z" level=info msg="CreateContainer within sandbox \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5\"" Jan 29 11:35:37.900467 containerd[1891]: time="2025-01-29T11:35:37.899340973Z" level=info msg="StartContainer for \"8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5\"" Jan 29 11:35:37.949644 systemd[1]: Started cri-containerd-8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5.scope - libcontainer container 8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5. Jan 29 11:35:38.067305 systemd[1]: cri-containerd-8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5.scope: Deactivated successfully. 
Jan 29 11:35:38.071184 containerd[1891]: time="2025-01-29T11:35:38.071141062Z" level=info msg="StartContainer for \"8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5\" returns successfully" Jan 29 11:35:38.226158 containerd[1891]: time="2025-01-29T11:35:38.225804523Z" level=info msg="shim disconnected" id=8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5 namespace=k8s.io Jan 29 11:35:38.226158 containerd[1891]: time="2025-01-29T11:35:38.225976241Z" level=warning msg="cleaning up after shim disconnected" id=8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5 namespace=k8s.io Jan 29 11:35:38.226158 containerd[1891]: time="2025-01-29T11:35:38.226099993Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:35:38.641242 kubelet[2371]: E0129 11:35:38.641005 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:38.883424 containerd[1891]: time="2025-01-29T11:35:38.879910345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:38.883424 containerd[1891]: time="2025-01-29T11:35:38.882659631Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 29 11:35:38.883931 containerd[1891]: time="2025-01-29T11:35:38.883882906Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:38.899509 containerd[1891]: time="2025-01-29T11:35:38.899174387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:38.900301 containerd[1891]: time="2025-01-29T11:35:38.900265579Z" level=info msg="CreateContainer within sandbox \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:35:38.904685 containerd[1891]: time="2025-01-29T11:35:38.903814853Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.705339352s" Jan 29 11:35:38.906042 containerd[1891]: time="2025-01-29T11:35:38.906001305Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 11:35:38.923225 containerd[1891]: time="2025-01-29T11:35:38.921165910Z" level=info msg="CreateContainer within sandbox \"7430f7a034548d0b27a6d6b18e5952de8c05ce3a76c76da21294a7fe350d2d34\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:35:38.923585 containerd[1891]: time="2025-01-29T11:35:38.923548233Z" level=info msg="CreateContainer within sandbox \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa\"" Jan 29 11:35:38.924038 containerd[1891]: time="2025-01-29T11:35:38.924009797Z" level=info msg="StartContainer for 
\"79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa\"" Jan 29 11:35:38.960091 containerd[1891]: time="2025-01-29T11:35:38.960041757Z" level=info msg="CreateContainer within sandbox \"7430f7a034548d0b27a6d6b18e5952de8c05ce3a76c76da21294a7fe350d2d34\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6515fb9db16c53a00d455ace079cbab4e03b3f242ee6adab83eca85cb1d504c1\"" Jan 29 11:35:38.961117 containerd[1891]: time="2025-01-29T11:35:38.961087690Z" level=info msg="StartContainer for \"6515fb9db16c53a00d455ace079cbab4e03b3f242ee6adab83eca85cb1d504c1\"" Jan 29 11:35:38.993611 systemd[1]: Started cri-containerd-79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa.scope - libcontainer container 79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa. Jan 29 11:35:39.025602 systemd[1]: Started cri-containerd-6515fb9db16c53a00d455ace079cbab4e03b3f242ee6adab83eca85cb1d504c1.scope - libcontainer container 6515fb9db16c53a00d455ace079cbab4e03b3f242ee6adab83eca85cb1d504c1. Jan 29 11:35:39.064462 systemd[1]: cri-containerd-79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa.scope: Deactivated successfully. Jan 29 11:35:39.069915 containerd[1891]: time="2025-01-29T11:35:39.069854095Z" level=info msg="StartContainer for \"79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa\" returns successfully" Jan 29 11:35:39.106675 containerd[1891]: time="2025-01-29T11:35:39.106549078Z" level=info msg="StartContainer for \"6515fb9db16c53a00d455ace079cbab4e03b3f242ee6adab83eca85cb1d504c1\" returns successfully" Jan 29 11:35:39.183006 containerd[1891]: time="2025-01-29T11:35:39.182808336Z" level=info msg="shim disconnected" id=79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa namespace=k8s.io Jan 29 11:35:39.183006 containerd[1891]: time="2025-01-29T11:35:39.182875640Z" level=warning msg="cleaning up after shim disconnected" id=79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa namespace=k8s.io Jan 29 11:35:39.183006 containerd[1891]: time="2025-01-29T11:35:39.182890664Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:35:39.641334 kubelet[2371]: E0129 11:35:39.641212 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:39.913746 containerd[1891]: time="2025-01-29T11:35:39.913638208Z" level=info msg="CreateContainer within sandbox \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:35:39.923449 kubelet[2371]: I0129 11:35:39.923388 2371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r6d99" podStartSLOduration=3.951609373 podStartE2EDuration="15.923360833s" podCreationTimestamp="2025-01-29 11:35:24 +0000 UTC" firstStartedPulling="2025-01-29 11:35:26.947171705 +0000 UTC m=+3.070267474" lastFinishedPulling="2025-01-29 11:35:38.918923171 +0000 UTC m=+15.042018934" observedRunningTime="2025-01-29 11:35:39.923124995 +0000 UTC m=+16.046220775" watchObservedRunningTime="2025-01-29 11:35:39.923360833 +0000 UTC m=+16.046456609" Jan 29 11:35:39.943528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount964696460.mount: Deactivated successfully. 
Jan 29 11:35:39.953463 containerd[1891]: time="2025-01-29T11:35:39.953413450Z" level=info msg="CreateContainer within sandbox \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f\"" Jan 29 11:35:39.954349 containerd[1891]: time="2025-01-29T11:35:39.954306612Z" level=info msg="StartContainer for \"b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f\"" Jan 29 11:35:40.002407 systemd[1]: Started cri-containerd-b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f.scope - libcontainer container b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f. Jan 29 11:35:40.053212 containerd[1891]: time="2025-01-29T11:35:40.052635919Z" level=info msg="StartContainer for \"b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f\" returns successfully" Jan 29 11:35:40.254781 kubelet[2371]: I0129 11:35:40.254669 2371 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 11:35:40.612400 kernel: Initializing XFRM netlink socket Jan 29 11:35:40.642794 kubelet[2371]: E0129 11:35:40.642703 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:40.937323 kubelet[2371]: I0129 11:35:40.937130 2371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kxxrh" podStartSLOduration=7.685299219 podStartE2EDuration="16.937108141s" podCreationTimestamp="2025-01-29 11:35:24 +0000 UTC" firstStartedPulling="2025-01-29 11:35:26.94461089 +0000 UTC m=+3.067706647" lastFinishedPulling="2025-01-29 11:35:36.196419813 +0000 UTC m=+12.319515569" observedRunningTime="2025-01-29 11:35:40.936222408 +0000 UTC m=+17.059318184" watchObservedRunningTime="2025-01-29 11:35:40.937108141 +0000 UTC m=+17.060204433" Jan 29 11:35:41.643324 kubelet[2371]: E0129 11:35:41.643261 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:42.356366 systemd-networkd[1734]: cilium_host: Link UP Jan 29 11:35:42.358218 (udev-worker)[3068]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:35:42.368869 systemd-networkd[1734]: cilium_net: Link UP Jan 29 11:35:42.370883 systemd-networkd[1734]: cilium_net: Gained carrier Jan 29 11:35:42.377404 systemd-networkd[1734]: cilium_host: Gained carrier Jan 29 11:35:42.382315 (udev-worker)[3069]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:35:42.393677 systemd-networkd[1734]: cilium_host: Gained IPv6LL Jan 29 11:35:42.644113 kubelet[2371]: E0129 11:35:42.643823 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:42.685912 (udev-worker)[3082]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:35:42.693178 systemd-networkd[1734]: cilium_vxlan: Link UP Jan 29 11:35:42.693187 systemd-networkd[1734]: cilium_vxlan: Gained carrier Jan 29 11:35:43.217846 kernel: NET: Registered PF_ALG protocol family Jan 29 11:35:43.258806 systemd-networkd[1734]: cilium_net: Gained IPv6LL Jan 29 11:35:43.476986 systemd[1]: Created slice kubepods-besteffort-pod9ead1f4f_2de7_4bed_8f16_f49d8379ce4e.slice - libcontainer container kubepods-besteffort-pod9ead1f4f_2de7_4bed_8f16_f49d8379ce4e.slice. 
Jan 29 11:35:43.642199 kubelet[2371]: I0129 11:35:43.642073 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqrhx\" (UniqueName: \"kubernetes.io/projected/9ead1f4f-2de7-4bed-8f16-f49d8379ce4e-kube-api-access-kqrhx\") pod \"nginx-deployment-8587fbcb89-zbq7h\" (UID: \"9ead1f4f-2de7-4bed-8f16-f49d8379ce4e\") " pod="default/nginx-deployment-8587fbcb89-zbq7h" Jan 29 11:35:43.646211 kubelet[2371]: E0129 11:35:43.646112 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:43.786814 containerd[1891]: time="2025-01-29T11:35:43.784860437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-zbq7h,Uid:9ead1f4f-2de7-4bed-8f16-f49d8379ce4e,Namespace:default,Attempt:0,}" Jan 29 11:35:44.025724 systemd-networkd[1734]: cilium_vxlan: Gained IPv6LL Jan 29 11:35:44.350089 systemd-networkd[1734]: lxc_health: Link UP Jan 29 11:35:44.357295 systemd-networkd[1734]: lxc_health: Gained carrier Jan 29 11:35:44.629951 kubelet[2371]: E0129 11:35:44.629820 2371 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:44.646513 kubelet[2371]: E0129 11:35:44.646451 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:44.870700 systemd-networkd[1734]: lxce062dfd49bf5: Link UP Jan 29 11:35:44.878411 kernel: eth0: renamed from tmp5b358 Jan 29 11:35:44.885593 systemd-networkd[1734]: lxce062dfd49bf5: Gained carrier Jan 29 11:35:45.497640 systemd-networkd[1734]: lxc_health: Gained IPv6LL Jan 29 11:35:45.646953 kubelet[2371]: E0129 11:35:45.646867 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:46.648310 kubelet[2371]: E0129 11:35:46.648228 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:46.777577 systemd-networkd[1734]: lxce062dfd49bf5: Gained IPv6LL Jan 29 11:35:47.650403 kubelet[2371]: E0129 11:35:47.649211 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:48.651788 kubelet[2371]: E0129 11:35:48.651725 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:48.998481 ntpd[1862]: Listen normally on 7 cilium_host 192.168.1.92:123 Jan 29 11:35:49.002107 ntpd[1862]: 29 Jan 11:35:48 ntpd[1862]: Listen normally on 7 cilium_host 192.168.1.92:123 Jan 29 11:35:49.002107 ntpd[1862]: 29 Jan 11:35:48 ntpd[1862]: Listen normally on 8 cilium_net [fe80::8884:54ff:fe5e:f9b9%3]:123 Jan 29 11:35:49.002107 ntpd[1862]: 29 Jan 11:35:48 ntpd[1862]: Listen normally on 9 cilium_host [fe80::ec73:88ff:fe42:f1a8%4]:123 Jan 29 11:35:49.002107 ntpd[1862]: 29 Jan 11:35:48 ntpd[1862]: Listen normally on 10 cilium_vxlan [fe80::742d:27ff:fe15:40a3%5]:123 Jan 29 11:35:49.002107 ntpd[1862]: 29 Jan 11:35:48 ntpd[1862]: Listen normally on 11 lxc_health [fe80::c0d0:28ff:fe87:bb12%7]:123 Jan 29 11:35:49.002107 ntpd[1862]: 29 Jan 11:35:48 ntpd[1862]: Listen normally on 12 lxce062dfd49bf5 [fe80::ec1c:3dff:fef0:4028%9]:123 Jan 29 11:35:48.998600 ntpd[1862]: Listen normally on 8 cilium_net [fe80::8884:54ff:fe5e:f9b9%3]:123 Jan 29 11:35:48.998655 ntpd[1862]: Listen normally on 9 cilium_host 
[fe80::ec73:88ff:fe42:f1a8%4]:123 Jan 29 11:35:48.998697 ntpd[1862]: Listen normally on 10 cilium_vxlan [fe80::742d:27ff:fe15:40a3%5]:123 Jan 29 11:35:48.998734 ntpd[1862]: Listen normally on 11 lxc_health [fe80::c0d0:28ff:fe87:bb12%7]:123 Jan 29 11:35:48.998771 ntpd[1862]: Listen normally on 12 lxce062dfd49bf5 [fe80::ec1c:3dff:fef0:4028%9]:123 Jan 29 11:35:49.652309 kubelet[2371]: E0129 11:35:49.652249 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:50.655196 kubelet[2371]: E0129 11:35:50.653864 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:50.688868 update_engine[1867]: I20250129 11:35:50.685436 1867 update_attempter.cc:509] Updating boot flags... Jan 29 11:35:50.724507 containerd[1891]: time="2025-01-29T11:35:50.720987498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:35:50.724507 containerd[1891]: time="2025-01-29T11:35:50.721070309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:35:50.724507 containerd[1891]: time="2025-01-29T11:35:50.721197915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:50.724507 containerd[1891]: time="2025-01-29T11:35:50.723445882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:35:50.802144 systemd[1]: run-containerd-runc-k8s.io-5b358eb5f7f11aa39dce4ffb7f36c0443f5eafd366b8812a4d3646df8d39480b-runc.FXgZRo.mount: Deactivated successfully. Jan 29 11:35:50.824919 systemd[1]: Started cri-containerd-5b358eb5f7f11aa39dce4ffb7f36c0443f5eafd366b8812a4d3646df8d39480b.scope - libcontainer container 5b358eb5f7f11aa39dce4ffb7f36c0443f5eafd366b8812a4d3646df8d39480b. 
Jan 29 11:35:50.872022 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3491) Jan 29 11:35:50.956609 containerd[1891]: time="2025-01-29T11:35:50.956463789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-zbq7h,Uid:9ead1f4f-2de7-4bed-8f16-f49d8379ce4e,Namespace:default,Attempt:0,} returns sandbox id \"5b358eb5f7f11aa39dce4ffb7f36c0443f5eafd366b8812a4d3646df8d39480b\"" Jan 29 11:35:50.962031 containerd[1891]: time="2025-01-29T11:35:50.960692721Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 11:35:51.093421 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3495) Jan 29 11:35:51.311399 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3495) Jan 29 11:35:51.656506 kubelet[2371]: E0129 11:35:51.656236 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:52.657896 kubelet[2371]: E0129 11:35:52.657826 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:53.658785 kubelet[2371]: E0129 11:35:53.658626 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:54.659219 kubelet[2371]: E0129 11:35:54.659176 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:54.673686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount920580335.mount: Deactivated successfully. Jan 29 11:35:55.659808 kubelet[2371]: E0129 11:35:55.659747 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:56.467670 containerd[1891]: time="2025-01-29T11:35:56.467611968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:56.469709 containerd[1891]: time="2025-01-29T11:35:56.469498577Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 29 11:35:56.472438 containerd[1891]: time="2025-01-29T11:35:56.472011050Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:56.475961 containerd[1891]: time="2025-01-29T11:35:56.475917197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:35:56.477780 containerd[1891]: time="2025-01-29T11:35:56.477343432Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 5.516499567s" Jan 29 11:35:56.477899 containerd[1891]: time="2025-01-29T11:35:56.477784503Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 11:35:56.480787 containerd[1891]: 
time="2025-01-29T11:35:56.480754918Z" level=info msg="CreateContainer within sandbox \"5b358eb5f7f11aa39dce4ffb7f36c0443f5eafd366b8812a4d3646df8d39480b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 11:35:56.521269 containerd[1891]: time="2025-01-29T11:35:56.521217619Z" level=info msg="CreateContainer within sandbox \"5b358eb5f7f11aa39dce4ffb7f36c0443f5eafd366b8812a4d3646df8d39480b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"cb3b66c692ad13094da9368b195c4ebfd5a6e2ce8db476de39a17488b85f424c\"" Jan 29 11:35:56.522011 containerd[1891]: time="2025-01-29T11:35:56.521983397Z" level=info msg="StartContainer for \"cb3b66c692ad13094da9368b195c4ebfd5a6e2ce8db476de39a17488b85f424c\"" Jan 29 11:35:56.626584 systemd[1]: Started cri-containerd-cb3b66c692ad13094da9368b195c4ebfd5a6e2ce8db476de39a17488b85f424c.scope - libcontainer container cb3b66c692ad13094da9368b195c4ebfd5a6e2ce8db476de39a17488b85f424c. Jan 29 11:35:56.660071 kubelet[2371]: E0129 11:35:56.660007 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:56.686359 containerd[1891]: time="2025-01-29T11:35:56.686317012Z" level=info msg="StartContainer for \"cb3b66c692ad13094da9368b195c4ebfd5a6e2ce8db476de39a17488b85f424c\" returns successfully" Jan 29 11:35:56.981923 kubelet[2371]: I0129 11:35:56.981856 2371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-zbq7h" podStartSLOduration=8.462700198 podStartE2EDuration="13.981836169s" podCreationTimestamp="2025-01-29 11:35:43 +0000 UTC" firstStartedPulling="2025-01-29 11:35:50.960178989 +0000 UTC m=+27.083274747" lastFinishedPulling="2025-01-29 11:35:56.479314951 +0000 UTC m=+32.602410718" observedRunningTime="2025-01-29 11:35:56.981755854 +0000 UTC m=+33.104851629" watchObservedRunningTime="2025-01-29 11:35:56.981836169 +0000 UTC m=+33.104931945" Jan 29 11:35:57.661134 kubelet[2371]: E0129 11:35:57.661078 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:58.662275 kubelet[2371]: E0129 11:35:58.662222 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:35:59.662961 kubelet[2371]: E0129 11:35:59.662910 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:00.664326 kubelet[2371]: E0129 11:36:00.664252 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:01.665508 kubelet[2371]: E0129 11:36:01.665449 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:02.666468 kubelet[2371]: E0129 11:36:02.666409 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:03.667180 kubelet[2371]: E0129 11:36:03.667125 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:04.629760 kubelet[2371]: E0129 11:36:04.629705 2371 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:04.668363 kubelet[2371]: E0129 11:36:04.668307 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 29 11:36:05.168610 systemd[1]: Created slice kubepods-besteffort-pod475f0b70_ccbf_4a72_9ced_6f73972c2cd3.slice - libcontainer container kubepods-besteffort-pod475f0b70_ccbf_4a72_9ced_6f73972c2cd3.slice. Jan 29 11:36:05.312619 kubelet[2371]: I0129 11:36:05.312526 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/475f0b70-ccbf-4a72-9ced-6f73972c2cd3-data\") pod \"nfs-server-provisioner-0\" (UID: \"475f0b70-ccbf-4a72-9ced-6f73972c2cd3\") " pod="default/nfs-server-provisioner-0" Jan 29 11:36:05.312619 kubelet[2371]: I0129 11:36:05.312627 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4bp5\" (UniqueName: \"kubernetes.io/projected/475f0b70-ccbf-4a72-9ced-6f73972c2cd3-kube-api-access-m4bp5\") pod \"nfs-server-provisioner-0\" (UID: \"475f0b70-ccbf-4a72-9ced-6f73972c2cd3\") " pod="default/nfs-server-provisioner-0" Jan 29 11:36:05.483841 containerd[1891]: time="2025-01-29T11:36:05.483713356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:475f0b70-ccbf-4a72-9ced-6f73972c2cd3,Namespace:default,Attempt:0,}" Jan 29 11:36:05.554899 systemd-networkd[1734]: lxcb9efe9e4a77b: Link UP Jan 29 11:36:05.571438 kernel: eth0: renamed from tmpdf41d Jan 29 11:36:05.580291 (udev-worker)[3838]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:36:05.581013 systemd-networkd[1734]: lxcb9efe9e4a77b: Gained carrier Jan 29 11:36:05.671892 kubelet[2371]: E0129 11:36:05.671847 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:05.934827 containerd[1891]: time="2025-01-29T11:36:05.934721063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:05.934827 containerd[1891]: time="2025-01-29T11:36:05.934796532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:05.935192 containerd[1891]: time="2025-01-29T11:36:05.934813126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:05.935996 containerd[1891]: time="2025-01-29T11:36:05.935918440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:05.969598 systemd[1]: Started cri-containerd-df41de4411145192be4b405e5ef0ecb7c48d91d0020af6d9b9aaf041c7caf4c7.scope - libcontainer container df41de4411145192be4b405e5ef0ecb7c48d91d0020af6d9b9aaf041c7caf4c7. 
Jan 29 11:36:06.029334 containerd[1891]: time="2025-01-29T11:36:06.029281277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:475f0b70-ccbf-4a72-9ced-6f73972c2cd3,Namespace:default,Attempt:0,} returns sandbox id \"df41de4411145192be4b405e5ef0ecb7c48d91d0020af6d9b9aaf041c7caf4c7\"" Jan 29 11:36:06.034692 containerd[1891]: time="2025-01-29T11:36:06.034510984Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 11:36:06.624503 systemd-networkd[1734]: lxcb9efe9e4a77b: Gained IPv6LL Jan 29 11:36:06.673008 kubelet[2371]: E0129 11:36:06.672956 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:07.673799 kubelet[2371]: E0129 11:36:07.673763 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:08.675736 kubelet[2371]: E0129 11:36:08.675690 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:09.007692 ntpd[1862]: Listen normally on 13 lxcb9efe9e4a77b [fe80::c42f:f7ff:fe3d:ed7c%11]:123 Jan 29 11:36:09.009685 ntpd[1862]: 29 Jan 11:36:09 ntpd[1862]: Listen normally on 13 lxcb9efe9e4a77b [fe80::c42f:f7ff:fe3d:ed7c%11]:123 Jan 29 11:36:09.393701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount42754710.mount: Deactivated successfully. Jan 29 11:36:09.676896 kubelet[2371]: E0129 11:36:09.676745 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:10.677633 kubelet[2371]: E0129 11:36:10.677520 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:11.678934 kubelet[2371]: E0129 11:36:11.678678 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:12.269863 containerd[1891]: time="2025-01-29T11:36:12.269807157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:12.271876 containerd[1891]: time="2025-01-29T11:36:12.271699225Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 29 11:36:12.277361 containerd[1891]: time="2025-01-29T11:36:12.277275330Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:12.282186 containerd[1891]: time="2025-01-29T11:36:12.281803897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:12.283156 containerd[1891]: time="2025-01-29T11:36:12.283111620Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.248482345s" Jan 29 11:36:12.283274 containerd[1891]: 
time="2025-01-29T11:36:12.283161490Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 29 11:36:12.286333 containerd[1891]: time="2025-01-29T11:36:12.286297462Z" level=info msg="CreateContainer within sandbox \"df41de4411145192be4b405e5ef0ecb7c48d91d0020af6d9b9aaf041c7caf4c7\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 11:36:12.315784 containerd[1891]: time="2025-01-29T11:36:12.315726480Z" level=info msg="CreateContainer within sandbox \"df41de4411145192be4b405e5ef0ecb7c48d91d0020af6d9b9aaf041c7caf4c7\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"7fa0548ee45105b426725f55f04a73b9108b964d535cbe93cb1eb6a28dd8f687\"" Jan 29 11:36:12.316479 containerd[1891]: time="2025-01-29T11:36:12.316450274Z" level=info msg="StartContainer for \"7fa0548ee45105b426725f55f04a73b9108b964d535cbe93cb1eb6a28dd8f687\"" Jan 29 11:36:12.383958 systemd[1]: run-containerd-runc-k8s.io-7fa0548ee45105b426725f55f04a73b9108b964d535cbe93cb1eb6a28dd8f687-runc.HZs7GZ.mount: Deactivated successfully. Jan 29 11:36:12.399637 systemd[1]: Started cri-containerd-7fa0548ee45105b426725f55f04a73b9108b964d535cbe93cb1eb6a28dd8f687.scope - libcontainer container 7fa0548ee45105b426725f55f04a73b9108b964d535cbe93cb1eb6a28dd8f687. Jan 29 11:36:12.451238 containerd[1891]: time="2025-01-29T11:36:12.451028971Z" level=info msg="StartContainer for \"7fa0548ee45105b426725f55f04a73b9108b964d535cbe93cb1eb6a28dd8f687\" returns successfully" Jan 29 11:36:12.679996 kubelet[2371]: E0129 11:36:12.679938 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:13.050219 kubelet[2371]: I0129 11:36:13.050149 2371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.799574912 podStartE2EDuration="8.050126807s" podCreationTimestamp="2025-01-29 11:36:05 +0000 UTC" firstStartedPulling="2025-01-29 11:36:06.033990078 +0000 UTC m=+42.157085841" lastFinishedPulling="2025-01-29 11:36:12.284541974 +0000 UTC m=+48.407637736" observedRunningTime="2025-01-29 11:36:13.049855881 +0000 UTC m=+49.172951660" watchObservedRunningTime="2025-01-29 11:36:13.050126807 +0000 UTC m=+49.173222585" Jan 29 11:36:13.681093 kubelet[2371]: E0129 11:36:13.681032 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:14.681408 kubelet[2371]: E0129 11:36:14.681348 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:15.682494 kubelet[2371]: E0129 11:36:15.682449 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:16.682943 kubelet[2371]: E0129 11:36:16.682874 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:17.683757 kubelet[2371]: E0129 11:36:17.683705 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:18.684215 kubelet[2371]: E0129 11:36:18.684151 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:19.685367 kubelet[2371]: E0129 11:36:19.685321 
2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:20.686471 kubelet[2371]: E0129 11:36:20.686414 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:21.687260 kubelet[2371]: E0129 11:36:21.687212 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:22.177286 systemd[1]: Created slice kubepods-besteffort-pod3896e990_aca6_4fb8_8a71_24a0e705a68d.slice - libcontainer container kubepods-besteffort-pod3896e990_aca6_4fb8_8a71_24a0e705a68d.slice. Jan 29 11:36:22.266430 kubelet[2371]: I0129 11:36:22.266387 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms7hb\" (UniqueName: \"kubernetes.io/projected/3896e990-aca6-4fb8-8a71-24a0e705a68d-kube-api-access-ms7hb\") pod \"test-pod-1\" (UID: \"3896e990-aca6-4fb8-8a71-24a0e705a68d\") " pod="default/test-pod-1" Jan 29 11:36:22.266656 kubelet[2371]: I0129 11:36:22.266444 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b4dd6fc3-1cc2-44ef-9e82-f8690cdfb434\" (UniqueName: \"kubernetes.io/nfs/3896e990-aca6-4fb8-8a71-24a0e705a68d-pvc-b4dd6fc3-1cc2-44ef-9e82-f8690cdfb434\") pod \"test-pod-1\" (UID: \"3896e990-aca6-4fb8-8a71-24a0e705a68d\") " pod="default/test-pod-1" Jan 29 11:36:22.403415 kernel: FS-Cache: Loaded Jan 29 11:36:22.484738 kernel: RPC: Registered named UNIX socket transport module. Jan 29 11:36:22.484867 kernel: RPC: Registered udp transport module. Jan 29 11:36:22.484898 kernel: RPC: Registered tcp transport module. Jan 29 11:36:22.485568 kernel: RPC: Registered tcp-with-tls transport module. Jan 29 11:36:22.485640 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 29 11:36:22.689749 kubelet[2371]: E0129 11:36:22.688287 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:22.908144 kernel: NFS: Registering the id_resolver key type Jan 29 11:36:22.908361 kernel: Key type id_resolver registered Jan 29 11:36:22.908403 kernel: Key type id_legacy registered Jan 29 11:36:22.945582 nfsidmap[4030]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 29 11:36:22.951558 nfsidmap[4032]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 29 11:36:23.081840 containerd[1891]: time="2025-01-29T11:36:23.081786930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3896e990-aca6-4fb8-8a71-24a0e705a68d,Namespace:default,Attempt:0,}" Jan 29 11:36:23.133574 (udev-worker)[4023]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:36:23.134430 (udev-worker)[4031]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:36:23.136207 systemd-networkd[1734]: lxcf1e73c3a7abe: Link UP Jan 29 11:36:23.146413 kernel: eth0: renamed from tmpf5908 Jan 29 11:36:23.156783 systemd-networkd[1734]: lxcf1e73c3a7abe: Gained carrier Jan 29 11:36:23.442697 containerd[1891]: time="2025-01-29T11:36:23.441829317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:23.442697 containerd[1891]: time="2025-01-29T11:36:23.442187100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:23.442697 containerd[1891]: time="2025-01-29T11:36:23.442211133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:23.442697 containerd[1891]: time="2025-01-29T11:36:23.442460775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:23.505792 systemd[1]: Started cri-containerd-f5908c5ce291d7bcb2943714d2ceac6ba02874ed079f60ce249f66826f595d83.scope - libcontainer container f5908c5ce291d7bcb2943714d2ceac6ba02874ed079f60ce249f66826f595d83. Jan 29 11:36:23.581613 containerd[1891]: time="2025-01-29T11:36:23.581565161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3896e990-aca6-4fb8-8a71-24a0e705a68d,Namespace:default,Attempt:0,} returns sandbox id \"f5908c5ce291d7bcb2943714d2ceac6ba02874ed079f60ce249f66826f595d83\"" Jan 29 11:36:23.592454 containerd[1891]: time="2025-01-29T11:36:23.592409574Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 11:36:23.688548 kubelet[2371]: E0129 11:36:23.688479 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:23.979320 containerd[1891]: time="2025-01-29T11:36:23.979269225Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:36:23.982175 containerd[1891]: time="2025-01-29T11:36:23.981484375Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 29 11:36:23.985176 containerd[1891]: time="2025-01-29T11:36:23.985127086Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 392.665287ms" Jan 29 11:36:23.985176 containerd[1891]: time="2025-01-29T11:36:23.985163123Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 11:36:23.987973 containerd[1891]: time="2025-01-29T11:36:23.987886864Z" level=info msg="CreateContainer within sandbox \"f5908c5ce291d7bcb2943714d2ceac6ba02874ed079f60ce249f66826f595d83\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 29 11:36:24.031192 containerd[1891]: time="2025-01-29T11:36:24.031142221Z" level=info msg="CreateContainer within sandbox \"f5908c5ce291d7bcb2943714d2ceac6ba02874ed079f60ce249f66826f595d83\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"bb0fcbc356919b7678e3b65f0cf8e2d12d01b27657f41d3c633f3fd500132187\"" Jan 29 11:36:24.032191 containerd[1891]: time="2025-01-29T11:36:24.032154414Z" level=info msg="StartContainer for \"bb0fcbc356919b7678e3b65f0cf8e2d12d01b27657f41d3c633f3fd500132187\"" Jan 29 11:36:24.112707 systemd[1]: Started cri-containerd-bb0fcbc356919b7678e3b65f0cf8e2d12d01b27657f41d3c633f3fd500132187.scope - libcontainer container 
bb0fcbc356919b7678e3b65f0cf8e2d12d01b27657f41d3c633f3fd500132187. Jan 29 11:36:24.156501 containerd[1891]: time="2025-01-29T11:36:24.156449618Z" level=info msg="StartContainer for \"bb0fcbc356919b7678e3b65f0cf8e2d12d01b27657f41d3c633f3fd500132187\" returns successfully" Jan 29 11:36:24.629897 kubelet[2371]: E0129 11:36:24.629806 2371 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:24.689127 kubelet[2371]: E0129 11:36:24.689069 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:24.795410 systemd-networkd[1734]: lxcf1e73c3a7abe: Gained IPv6LL Jan 29 11:36:25.107668 kubelet[2371]: I0129 11:36:25.107599 2371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.710233987 podStartE2EDuration="20.107580035s" podCreationTimestamp="2025-01-29 11:36:05 +0000 UTC" firstStartedPulling="2025-01-29 11:36:23.588803041 +0000 UTC m=+59.711898800" lastFinishedPulling="2025-01-29 11:36:23.986149092 +0000 UTC m=+60.109244848" observedRunningTime="2025-01-29 11:36:25.1071374 +0000 UTC m=+61.230233175" watchObservedRunningTime="2025-01-29 11:36:25.107580035 +0000 UTC m=+61.230675812" Jan 29 11:36:25.690435 kubelet[2371]: E0129 11:36:25.690356 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:26.690824 kubelet[2371]: E0129 11:36:26.690763 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:26.998279 ntpd[1862]: Listen normally on 14 lxcf1e73c3a7abe [fe80::30bb:dff:fed9:4b02%13]:123 Jan 29 11:36:26.999551 ntpd[1862]: 29 Jan 11:36:26 ntpd[1862]: Listen normally on 14 lxcf1e73c3a7abe [fe80::30bb:dff:fed9:4b02%13]:123 Jan 29 11:36:27.691723 kubelet[2371]: E0129 11:36:27.691664 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:28.692308 kubelet[2371]: E0129 11:36:28.692249 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:29.692493 kubelet[2371]: E0129 11:36:29.692439 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:30.641212 systemd[1]: run-containerd-runc-k8s.io-b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f-runc.gUJzVc.mount: Deactivated successfully. 
Jan 29 11:36:30.664423 containerd[1891]: time="2025-01-29T11:36:30.664345472Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:36:30.675320 containerd[1891]: time="2025-01-29T11:36:30.675280831Z" level=info msg="StopContainer for \"b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f\" with timeout 2 (s)" Jan 29 11:36:30.675705 containerd[1891]: time="2025-01-29T11:36:30.675674399Z" level=info msg="Stop container \"b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f\" with signal terminated" Jan 29 11:36:30.686664 systemd-networkd[1734]: lxc_health: Link DOWN Jan 29 11:36:30.686676 systemd-networkd[1734]: lxc_health: Lost carrier Jan 29 11:36:30.696314 kubelet[2371]: E0129 11:36:30.692646 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:30.712095 systemd[1]: cri-containerd-b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f.scope: Deactivated successfully. Jan 29 11:36:30.712728 systemd[1]: cri-containerd-b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f.scope: Consumed 8.868s CPU time. Jan 29 11:36:30.769599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f-rootfs.mount: Deactivated successfully. Jan 29 11:36:30.800888 containerd[1891]: time="2025-01-29T11:36:30.800645923Z" level=info msg="shim disconnected" id=b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f namespace=k8s.io Jan 29 11:36:30.800888 containerd[1891]: time="2025-01-29T11:36:30.800883260Z" level=warning msg="cleaning up after shim disconnected" id=b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f namespace=k8s.io Jan 29 11:36:30.801285 containerd[1891]: time="2025-01-29T11:36:30.800897914Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:36:30.824400 containerd[1891]: time="2025-01-29T11:36:30.824337214Z" level=info msg="StopContainer for \"b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f\" returns successfully" Jan 29 11:36:30.825458 containerd[1891]: time="2025-01-29T11:36:30.825426793Z" level=info msg="StopPodSandbox for \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\"" Jan 29 11:36:30.825574 containerd[1891]: time="2025-01-29T11:36:30.825468940Z" level=info msg="Container to stop \"bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:36:30.825574 containerd[1891]: time="2025-01-29T11:36:30.825520544Z" level=info msg="Container to stop \"110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:36:30.825574 containerd[1891]: time="2025-01-29T11:36:30.825537293Z" level=info msg="Container to stop \"8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:36:30.825574 containerd[1891]: time="2025-01-29T11:36:30.825550893Z" level=info msg="Container to stop \"79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:36:30.825574 containerd[1891]: 
time="2025-01-29T11:36:30.825565033Z" level=info msg="Container to stop \"b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:36:30.830031 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d-shm.mount: Deactivated successfully. Jan 29 11:36:30.838263 systemd[1]: cri-containerd-05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d.scope: Deactivated successfully. Jan 29 11:36:30.869547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d-rootfs.mount: Deactivated successfully. Jan 29 11:36:30.877542 containerd[1891]: time="2025-01-29T11:36:30.877166063Z" level=info msg="shim disconnected" id=05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d namespace=k8s.io Jan 29 11:36:30.877542 containerd[1891]: time="2025-01-29T11:36:30.877233457Z" level=warning msg="cleaning up after shim disconnected" id=05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d namespace=k8s.io Jan 29 11:36:30.877542 containerd[1891]: time="2025-01-29T11:36:30.877247061Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:36:30.897438 containerd[1891]: time="2025-01-29T11:36:30.897055982Z" level=info msg="TearDown network for sandbox \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\" successfully" Jan 29 11:36:30.897438 containerd[1891]: time="2025-01-29T11:36:30.897094546Z" level=info msg="StopPodSandbox for \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\" returns successfully" Jan 29 11:36:31.024280 kubelet[2371]: I0129 11:36:31.023730 2371 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-cilium-run\") pod \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " Jan 29 11:36:31.024280 kubelet[2371]: I0129 11:36:31.023777 2371 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-host-proc-sys-net\") pod \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " Jan 29 11:36:31.024280 kubelet[2371]: I0129 11:36:31.023805 2371 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-host-proc-sys-kernel\") pod \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " Jan 29 11:36:31.024280 kubelet[2371]: I0129 11:36:31.023814 2371 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" (UID: "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:36:31.024280 kubelet[2371]: I0129 11:36:31.023832 2371 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-cni-path\") pod \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " Jan 29 11:36:31.024280 kubelet[2371]: I0129 11:36:31.023865 2371 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-clustermesh-secrets\") pod \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " Jan 29 11:36:31.024711 kubelet[2371]: I0129 11:36:31.023873 2371 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" (UID: "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:36:31.024711 kubelet[2371]: I0129 11:36:31.023889 2371 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-xtables-lock\") pod \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " Jan 29 11:36:31.024711 kubelet[2371]: I0129 11:36:31.023897 2371 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" (UID: "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:36:31.024711 kubelet[2371]: I0129 11:36:31.023918 2371 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-bpf-maps\") pod \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " Jan 29 11:36:31.024711 kubelet[2371]: I0129 11:36:31.023943 2371 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-hubble-tls\") pod \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " Jan 29 11:36:31.024937 kubelet[2371]: I0129 11:36:31.023968 2371 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc5l5\" (UniqueName: \"kubernetes.io/projected/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-kube-api-access-gc5l5\") pod \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " Jan 29 11:36:31.024937 kubelet[2371]: I0129 11:36:31.023991 2371 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-lib-modules\") pod \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " Jan 29 11:36:31.024937 kubelet[2371]: I0129 11:36:31.024012 2371 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-hostproc\") pod \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " Jan 29 11:36:31.024937 kubelet[2371]: I0129 11:36:31.024074 2371 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-cilium-config-path\") pod \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " Jan 29 11:36:31.024937 kubelet[2371]: I0129 11:36:31.024101 2371 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-cilium-cgroup\") pod \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " Jan 29 11:36:31.024937 kubelet[2371]: I0129 11:36:31.024122 2371 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-etc-cni-netd\") pod \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\" (UID: \"2bce1cde-c6ac-4931-a09b-22d2bcd4ad21\") " Jan 29 11:36:31.025235 kubelet[2371]: I0129 11:36:31.024158 2371 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-cilium-run\") on node \"172.31.25.200\" DevicePath \"\"" Jan 29 11:36:31.025235 kubelet[2371]: I0129 11:36:31.024172 2371 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-host-proc-sys-net\") on node \"172.31.25.200\" DevicePath \"\"" Jan 29 11:36:31.025235 kubelet[2371]: I0129 11:36:31.024187 2371 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-host-proc-sys-kernel\") on node \"172.31.25.200\" DevicePath \"\"" Jan 29 11:36:31.025235 kubelet[2371]: I0129 11:36:31.024223 2371 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" (UID: "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:36:31.025235 kubelet[2371]: I0129 11:36:31.024248 2371 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" (UID: "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:36:31.025235 kubelet[2371]: I0129 11:36:31.024265 2371 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" (UID: "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:36:31.028410 kubelet[2371]: I0129 11:36:31.027793 2371 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-cni-path" (OuterVolumeSpecName: "cni-path") pod "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" (UID: "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:36:31.028410 kubelet[2371]: I0129 11:36:31.027858 2371 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-hostproc" (OuterVolumeSpecName: "hostproc") pod "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" (UID: "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:36:31.028410 kubelet[2371]: I0129 11:36:31.027892 2371 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" (UID: "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:36:31.028410 kubelet[2371]: I0129 11:36:31.028354 2371 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" (UID: "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:36:31.029146 kubelet[2371]: I0129 11:36:31.029118 2371 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" (UID: "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:36:31.031648 kubelet[2371]: I0129 11:36:31.031612 2371 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" (UID: "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:36:31.031868 kubelet[2371]: I0129 11:36:31.031843 2371 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" (UID: "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:36:31.033526 kubelet[2371]: I0129 11:36:31.033497 2371 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-kube-api-access-gc5l5" (OuterVolumeSpecName: "kube-api-access-gc5l5") pod "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" (UID: "2bce1cde-c6ac-4931-a09b-22d2bcd4ad21"). InnerVolumeSpecName "kube-api-access-gc5l5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:36:31.110417 kubelet[2371]: I0129 11:36:31.109352 2371 scope.go:117] "RemoveContainer" containerID="b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f" Jan 29 11:36:31.113516 containerd[1891]: time="2025-01-29T11:36:31.112748981Z" level=info msg="RemoveContainer for \"b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f\"" Jan 29 11:36:31.116562 systemd[1]: Removed slice kubepods-burstable-pod2bce1cde_c6ac_4931_a09b_22d2bcd4ad21.slice - libcontainer container kubepods-burstable-pod2bce1cde_c6ac_4931_a09b_22d2bcd4ad21.slice. Jan 29 11:36:31.117104 systemd[1]: kubepods-burstable-pod2bce1cde_c6ac_4931_a09b_22d2bcd4ad21.slice: Consumed 8.978s CPU time. 
Jan 29 11:36:31.120683 containerd[1891]: time="2025-01-29T11:36:31.120506813Z" level=info msg="RemoveContainer for \"b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f\" returns successfully" Jan 29 11:36:31.120916 kubelet[2371]: I0129 11:36:31.120886 2371 scope.go:117] "RemoveContainer" containerID="79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa" Jan 29 11:36:31.124131 containerd[1891]: time="2025-01-29T11:36:31.122852579Z" level=info msg="RemoveContainer for \"79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa\"" Jan 29 11:36:31.124515 kubelet[2371]: I0129 11:36:31.124490 2371 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-bpf-maps\") on node \"172.31.25.200\" DevicePath \"\"" Jan 29 11:36:31.124603 kubelet[2371]: I0129 11:36:31.124515 2371 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-cni-path\") on node \"172.31.25.200\" DevicePath \"\"" Jan 29 11:36:31.124603 kubelet[2371]: I0129 11:36:31.124531 2371 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-clustermesh-secrets\") on node \"172.31.25.200\" DevicePath \"\"" Jan 29 11:36:31.124603 kubelet[2371]: I0129 11:36:31.124544 2371 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-xtables-lock\") on node \"172.31.25.200\" DevicePath \"\"" Jan 29 11:36:31.124603 kubelet[2371]: I0129 11:36:31.124556 2371 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gc5l5\" (UniqueName: \"kubernetes.io/projected/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-kube-api-access-gc5l5\") on node \"172.31.25.200\" DevicePath \"\"" Jan 29 11:36:31.124603 kubelet[2371]: I0129 11:36:31.124569 2371 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-hubble-tls\") on node \"172.31.25.200\" DevicePath \"\"" Jan 29 11:36:31.125813 kubelet[2371]: I0129 11:36:31.124841 2371 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-etc-cni-netd\") on node \"172.31.25.200\" DevicePath \"\"" Jan 29 11:36:31.125813 kubelet[2371]: I0129 11:36:31.124871 2371 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-lib-modules\") on node \"172.31.25.200\" DevicePath \"\"" Jan 29 11:36:31.125813 kubelet[2371]: I0129 11:36:31.124884 2371 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-hostproc\") on node \"172.31.25.200\" DevicePath \"\"" Jan 29 11:36:31.125813 kubelet[2371]: I0129 11:36:31.124925 2371 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-cilium-config-path\") on node \"172.31.25.200\" DevicePath \"\"" Jan 29 11:36:31.125813 kubelet[2371]: I0129 11:36:31.124939 2371 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21-cilium-cgroup\") on node \"172.31.25.200\" DevicePath 
\"\"" Jan 29 11:36:31.128921 containerd[1891]: time="2025-01-29T11:36:31.128882812Z" level=info msg="RemoveContainer for \"79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa\" returns successfully" Jan 29 11:36:31.129234 kubelet[2371]: I0129 11:36:31.129209 2371 scope.go:117] "RemoveContainer" containerID="8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5" Jan 29 11:36:31.130528 containerd[1891]: time="2025-01-29T11:36:31.130480208Z" level=info msg="RemoveContainer for \"8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5\"" Jan 29 11:36:31.136453 containerd[1891]: time="2025-01-29T11:36:31.136407678Z" level=info msg="RemoveContainer for \"8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5\" returns successfully" Jan 29 11:36:31.136668 kubelet[2371]: I0129 11:36:31.136633 2371 scope.go:117] "RemoveContainer" containerID="110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2" Jan 29 11:36:31.138033 containerd[1891]: time="2025-01-29T11:36:31.137998055Z" level=info msg="RemoveContainer for \"110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2\"" Jan 29 11:36:31.143433 containerd[1891]: time="2025-01-29T11:36:31.143189900Z" level=info msg="RemoveContainer for \"110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2\" returns successfully" Jan 29 11:36:31.143965 kubelet[2371]: I0129 11:36:31.143641 2371 scope.go:117] "RemoveContainer" containerID="bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984" Jan 29 11:36:31.145277 containerd[1891]: time="2025-01-29T11:36:31.145223200Z" level=info msg="RemoveContainer for \"bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984\"" Jan 29 11:36:31.150607 containerd[1891]: time="2025-01-29T11:36:31.150489597Z" level=info msg="RemoveContainer for \"bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984\" returns successfully" Jan 29 11:36:31.152102 kubelet[2371]: I0129 11:36:31.152054 2371 scope.go:117] "RemoveContainer" containerID="b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f" Jan 29 11:36:31.152439 containerd[1891]: time="2025-01-29T11:36:31.152373693Z" level=error msg="ContainerStatus for \"b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f\": not found" Jan 29 11:36:31.153087 kubelet[2371]: E0129 11:36:31.152722 2371 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f\": not found" containerID="b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f" Jan 29 11:36:31.153087 kubelet[2371]: I0129 11:36:31.152758 2371 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f"} err="failed to get container status \"b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b114b11889e7959069f567bd0cbefe80d82e7ce69303fa8fb86217585a03f96f\": not found" Jan 29 11:36:31.154410 kubelet[2371]: I0129 11:36:31.152858 2371 scope.go:117] "RemoveContainer" containerID="79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa" Jan 29 11:36:31.154551 containerd[1891]: 
time="2025-01-29T11:36:31.153839201Z" level=error msg="ContainerStatus for \"79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa\": not found" Jan 29 11:36:31.154962 kubelet[2371]: E0129 11:36:31.154347 2371 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa\": not found" containerID="79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa" Jan 29 11:36:31.154962 kubelet[2371]: I0129 11:36:31.154622 2371 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa"} err="failed to get container status \"79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa\": rpc error: code = NotFound desc = an error occurred when try to find container \"79df8a6e7dd109abe58429c05590f4f6eb33ac4aa79e44ce2050492c35576caa\": not found" Jan 29 11:36:31.154962 kubelet[2371]: I0129 11:36:31.154806 2371 scope.go:117] "RemoveContainer" containerID="8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5" Jan 29 11:36:31.156053 containerd[1891]: time="2025-01-29T11:36:31.155475095Z" level=error msg="ContainerStatus for \"8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5\": not found" Jan 29 11:36:31.156143 kubelet[2371]: E0129 11:36:31.155683 2371 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5\": not found" containerID="8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5" Jan 29 11:36:31.156143 kubelet[2371]: I0129 11:36:31.155715 2371 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5"} err="failed to get container status \"8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b741616e6088a008ee467ec7d3c3178b9556bd4e47af35d3c8036de8b4decb5\": not found" Jan 29 11:36:31.156143 kubelet[2371]: I0129 11:36:31.155741 2371 scope.go:117] "RemoveContainer" containerID="110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2" Jan 29 11:36:31.156277 containerd[1891]: time="2025-01-29T11:36:31.156214871Z" level=error msg="ContainerStatus for \"110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2\": not found" Jan 29 11:36:31.156405 kubelet[2371]: E0129 11:36:31.156342 2371 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2\": not found" containerID="110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2" Jan 29 11:36:31.156405 kubelet[2371]: I0129 11:36:31.156391 2371 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2"} err="failed to get container status \"110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"110baf55fc99da99cefc62bd87ff1a420906ffda3ad9d88450ccb9fbeba4a9a2\": not found" Jan 29 11:36:31.157026 kubelet[2371]: I0129 11:36:31.156414 2371 scope.go:117] "RemoveContainer" containerID="bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984" Jan 29 11:36:31.157092 containerd[1891]: time="2025-01-29T11:36:31.156717162Z" level=error msg="ContainerStatus for \"bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984\": not found" Jan 29 11:36:31.157142 kubelet[2371]: E0129 11:36:31.156844 2371 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984\": not found" containerID="bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984" Jan 29 11:36:31.157142 kubelet[2371]: I0129 11:36:31.157060 2371 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984"} err="failed to get container status \"bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb647d2f8bae09b04b9bdd4f7c439ff14cc68ac59b13b74b21fdbfbd5c649984\": not found" Jan 29 11:36:31.628724 systemd[1]: var-lib-kubelet-pods-2bce1cde\x2dc6ac\x2d4931\x2da09b\x2d22d2bcd4ad21-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgc5l5.mount: Deactivated successfully. Jan 29 11:36:31.628867 systemd[1]: var-lib-kubelet-pods-2bce1cde\x2dc6ac\x2d4931\x2da09b\x2d22d2bcd4ad21-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 11:36:31.628949 systemd[1]: var-lib-kubelet-pods-2bce1cde\x2dc6ac\x2d4931\x2da09b\x2d22d2bcd4ad21-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 29 11:36:31.693446 kubelet[2371]: E0129 11:36:31.693369 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:32.694178 kubelet[2371]: E0129 11:36:32.694126 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:32.796387 kubelet[2371]: I0129 11:36:32.796328 2371 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" path="/var/lib/kubelet/pods/2bce1cde-c6ac-4931-a09b-22d2bcd4ad21/volumes" Jan 29 11:36:32.998252 ntpd[1862]: Deleting interface #11 lxc_health, fe80::c0d0:28ff:fe87:bb12%7#123, interface stats: received=0, sent=0, dropped=0, active_time=44 secs Jan 29 11:36:32.999667 ntpd[1862]: 29 Jan 11:36:32 ntpd[1862]: Deleting interface #11 lxc_health, fe80::c0d0:28ff:fe87:bb12%7#123, interface stats: received=0, sent=0, dropped=0, active_time=44 secs Jan 29 11:36:33.545577 kubelet[2371]: E0129 11:36:33.545534 2371 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" containerName="mount-cgroup" Jan 29 11:36:33.545577 kubelet[2371]: E0129 11:36:33.545566 2371 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" containerName="apply-sysctl-overwrites" Jan 29 11:36:33.545577 kubelet[2371]: E0129 11:36:33.545575 2371 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" containerName="mount-bpf-fs" Jan 29 11:36:33.545577 kubelet[2371]: E0129 11:36:33.545583 2371 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" containerName="clean-cilium-state" Jan 29 11:36:33.545577 kubelet[2371]: E0129 11:36:33.545590 2371 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" containerName="cilium-agent" Jan 29 11:36:33.545913 kubelet[2371]: I0129 11:36:33.545618 2371 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bce1cde-c6ac-4931-a09b-22d2bcd4ad21" containerName="cilium-agent" Jan 29 11:36:33.556253 systemd[1]: Created slice kubepods-besteffort-pod934148ff_2ef6_4d1e_a8f1_de9833cfd594.slice - libcontainer container kubepods-besteffort-pod934148ff_2ef6_4d1e_a8f1_de9833cfd594.slice. Jan 29 11:36:33.637880 kubelet[2371]: I0129 11:36:33.637845 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmwqq\" (UniqueName: \"kubernetes.io/projected/934148ff-2ef6-4d1e-a8f1-de9833cfd594-kube-api-access-bmwqq\") pod \"cilium-operator-5d85765b45-dk6qg\" (UID: \"934148ff-2ef6-4d1e-a8f1-de9833cfd594\") " pod="kube-system/cilium-operator-5d85765b45-dk6qg" Jan 29 11:36:33.640518 kubelet[2371]: I0129 11:36:33.637893 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/934148ff-2ef6-4d1e-a8f1-de9833cfd594-cilium-config-path\") pod \"cilium-operator-5d85765b45-dk6qg\" (UID: \"934148ff-2ef6-4d1e-a8f1-de9833cfd594\") " pod="kube-system/cilium-operator-5d85765b45-dk6qg" Jan 29 11:36:33.654769 systemd[1]: Created slice kubepods-burstable-pod616b09aa_7726_4259_8e74_549c359f57fb.slice - libcontainer container kubepods-burstable-pod616b09aa_7726_4259_8e74_549c359f57fb.slice. 
Jan 29 11:36:33.694369 kubelet[2371]: E0129 11:36:33.694319 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:33.738906 kubelet[2371]: I0129 11:36:33.738847 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/616b09aa-7726-4259-8e74-549c359f57fb-hubble-tls\") pod \"cilium-nf65c\" (UID: \"616b09aa-7726-4259-8e74-549c359f57fb\") " pod="kube-system/cilium-nf65c" Jan 29 11:36:33.738906 kubelet[2371]: I0129 11:36:33.738916 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/616b09aa-7726-4259-8e74-549c359f57fb-bpf-maps\") pod \"cilium-nf65c\" (UID: \"616b09aa-7726-4259-8e74-549c359f57fb\") " pod="kube-system/cilium-nf65c" Jan 29 11:36:33.739120 kubelet[2371]: I0129 11:36:33.738940 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/616b09aa-7726-4259-8e74-549c359f57fb-hostproc\") pod \"cilium-nf65c\" (UID: \"616b09aa-7726-4259-8e74-549c359f57fb\") " pod="kube-system/cilium-nf65c" Jan 29 11:36:33.739120 kubelet[2371]: I0129 11:36:33.738965 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/616b09aa-7726-4259-8e74-549c359f57fb-cni-path\") pod \"cilium-nf65c\" (UID: \"616b09aa-7726-4259-8e74-549c359f57fb\") " pod="kube-system/cilium-nf65c" Jan 29 11:36:33.739120 kubelet[2371]: I0129 11:36:33.738990 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/616b09aa-7726-4259-8e74-549c359f57fb-cilium-cgroup\") pod \"cilium-nf65c\" (UID: \"616b09aa-7726-4259-8e74-549c359f57fb\") " pod="kube-system/cilium-nf65c" Jan 29 11:36:33.739120 kubelet[2371]: I0129 11:36:33.739009 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/616b09aa-7726-4259-8e74-549c359f57fb-lib-modules\") pod \"cilium-nf65c\" (UID: \"616b09aa-7726-4259-8e74-549c359f57fb\") " pod="kube-system/cilium-nf65c" Jan 29 11:36:33.739120 kubelet[2371]: I0129 11:36:33.739030 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/616b09aa-7726-4259-8e74-549c359f57fb-clustermesh-secrets\") pod \"cilium-nf65c\" (UID: \"616b09aa-7726-4259-8e74-549c359f57fb\") " pod="kube-system/cilium-nf65c" Jan 29 11:36:33.739120 kubelet[2371]: I0129 11:36:33.739052 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/616b09aa-7726-4259-8e74-549c359f57fb-host-proc-sys-kernel\") pod \"cilium-nf65c\" (UID: \"616b09aa-7726-4259-8e74-549c359f57fb\") " pod="kube-system/cilium-nf65c" Jan 29 11:36:33.739360 kubelet[2371]: I0129 11:36:33.739073 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/616b09aa-7726-4259-8e74-549c359f57fb-host-proc-sys-net\") pod \"cilium-nf65c\" (UID: \"616b09aa-7726-4259-8e74-549c359f57fb\") " pod="kube-system/cilium-nf65c" Jan 29 11:36:33.739360 
kubelet[2371]: I0129 11:36:33.739097 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/616b09aa-7726-4259-8e74-549c359f57fb-etc-cni-netd\") pod \"cilium-nf65c\" (UID: \"616b09aa-7726-4259-8e74-549c359f57fb\") " pod="kube-system/cilium-nf65c" Jan 29 11:36:33.739360 kubelet[2371]: I0129 11:36:33.739121 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/616b09aa-7726-4259-8e74-549c359f57fb-xtables-lock\") pod \"cilium-nf65c\" (UID: \"616b09aa-7726-4259-8e74-549c359f57fb\") " pod="kube-system/cilium-nf65c" Jan 29 11:36:33.739360 kubelet[2371]: I0129 11:36:33.739145 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/616b09aa-7726-4259-8e74-549c359f57fb-cilium-config-path\") pod \"cilium-nf65c\" (UID: \"616b09aa-7726-4259-8e74-549c359f57fb\") " pod="kube-system/cilium-nf65c" Jan 29 11:36:33.739360 kubelet[2371]: I0129 11:36:33.739168 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/616b09aa-7726-4259-8e74-549c359f57fb-cilium-ipsec-secrets\") pod \"cilium-nf65c\" (UID: \"616b09aa-7726-4259-8e74-549c359f57fb\") " pod="kube-system/cilium-nf65c" Jan 29 11:36:33.739360 kubelet[2371]: I0129 11:36:33.739206 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/616b09aa-7726-4259-8e74-549c359f57fb-cilium-run\") pod \"cilium-nf65c\" (UID: \"616b09aa-7726-4259-8e74-549c359f57fb\") " pod="kube-system/cilium-nf65c" Jan 29 11:36:33.739639 kubelet[2371]: I0129 11:36:33.739228 2371 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z52d7\" (UniqueName: \"kubernetes.io/projected/616b09aa-7726-4259-8e74-549c359f57fb-kube-api-access-z52d7\") pod \"cilium-nf65c\" (UID: \"616b09aa-7726-4259-8e74-549c359f57fb\") " pod="kube-system/cilium-nf65c" Jan 29 11:36:33.873251 containerd[1891]: time="2025-01-29T11:36:33.871097616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dk6qg,Uid:934148ff-2ef6-4d1e-a8f1-de9833cfd594,Namespace:kube-system,Attempt:0,}" Jan 29 11:36:33.930720 containerd[1891]: time="2025-01-29T11:36:33.930593260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:33.930720 containerd[1891]: time="2025-01-29T11:36:33.930647257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:33.930720 containerd[1891]: time="2025-01-29T11:36:33.930661868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:33.931036 containerd[1891]: time="2025-01-29T11:36:33.930754821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:33.953760 systemd[1]: Started cri-containerd-d3f170adaffabfd96d60298cfd9fb5211c30af42682e5b51b2b02b157af642d9.scope - libcontainer container d3f170adaffabfd96d60298cfd9fb5211c30af42682e5b51b2b02b157af642d9. 
Jan 29 11:36:33.972046 containerd[1891]: time="2025-01-29T11:36:33.971639661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nf65c,Uid:616b09aa-7726-4259-8e74-549c359f57fb,Namespace:kube-system,Attempt:0,}" Jan 29 11:36:34.051613 containerd[1891]: time="2025-01-29T11:36:34.051299541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:36:34.051613 containerd[1891]: time="2025-01-29T11:36:34.051394918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:36:34.051613 containerd[1891]: time="2025-01-29T11:36:34.051418729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:34.051613 containerd[1891]: time="2025-01-29T11:36:34.051521145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:36:34.063263 containerd[1891]: time="2025-01-29T11:36:34.063165663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dk6qg,Uid:934148ff-2ef6-4d1e-a8f1-de9833cfd594,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3f170adaffabfd96d60298cfd9fb5211c30af42682e5b51b2b02b157af642d9\"" Jan 29 11:36:34.066360 containerd[1891]: time="2025-01-29T11:36:34.066321733Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 11:36:34.086933 systemd[1]: Started cri-containerd-2479962ee4d9201830c31bc2e7cf0d2475925be3860b0de1c989f3ac5f8fe02b.scope - libcontainer container 2479962ee4d9201830c31bc2e7cf0d2475925be3860b0de1c989f3ac5f8fe02b. Jan 29 11:36:34.118595 containerd[1891]: time="2025-01-29T11:36:34.118514848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nf65c,Uid:616b09aa-7726-4259-8e74-549c359f57fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2479962ee4d9201830c31bc2e7cf0d2475925be3860b0de1c989f3ac5f8fe02b\"" Jan 29 11:36:34.122448 containerd[1891]: time="2025-01-29T11:36:34.122309613Z" level=info msg="CreateContainer within sandbox \"2479962ee4d9201830c31bc2e7cf0d2475925be3860b0de1c989f3ac5f8fe02b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:36:34.151989 containerd[1891]: time="2025-01-29T11:36:34.151857908Z" level=info msg="CreateContainer within sandbox \"2479962ee4d9201830c31bc2e7cf0d2475925be3860b0de1c989f3ac5f8fe02b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dbd100106ba28c0ce79e8bd6c79d82e1f7c14c8cd3ceceaba6d58f786386703e\"" Jan 29 11:36:34.153423 containerd[1891]: time="2025-01-29T11:36:34.152920765Z" level=info msg="StartContainer for \"dbd100106ba28c0ce79e8bd6c79d82e1f7c14c8cd3ceceaba6d58f786386703e\"" Jan 29 11:36:34.188607 systemd[1]: Started cri-containerd-dbd100106ba28c0ce79e8bd6c79d82e1f7c14c8cd3ceceaba6d58f786386703e.scope - libcontainer container dbd100106ba28c0ce79e8bd6c79d82e1f7c14c8cd3ceceaba6d58f786386703e. Jan 29 11:36:34.273247 containerd[1891]: time="2025-01-29T11:36:34.273090057Z" level=info msg="StartContainer for \"dbd100106ba28c0ce79e8bd6c79d82e1f7c14c8cd3ceceaba6d58f786386703e\" returns successfully" Jan 29 11:36:34.291897 systemd[1]: cri-containerd-dbd100106ba28c0ce79e8bd6c79d82e1f7c14c8cd3ceceaba6d58f786386703e.scope: Deactivated successfully. 
Jan 29 11:36:34.348810 containerd[1891]: time="2025-01-29T11:36:34.348741898Z" level=info msg="shim disconnected" id=dbd100106ba28c0ce79e8bd6c79d82e1f7c14c8cd3ceceaba6d58f786386703e namespace=k8s.io Jan 29 11:36:34.348810 containerd[1891]: time="2025-01-29T11:36:34.348804187Z" level=warning msg="cleaning up after shim disconnected" id=dbd100106ba28c0ce79e8bd6c79d82e1f7c14c8cd3ceceaba6d58f786386703e namespace=k8s.io Jan 29 11:36:34.348810 containerd[1891]: time="2025-01-29T11:36:34.348815898Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:36:34.694997 kubelet[2371]: E0129 11:36:34.694938 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:36:34.792402 kubelet[2371]: E0129 11:36:34.790348 2371 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 11:36:35.125410 containerd[1891]: time="2025-01-29T11:36:35.125354352Z" level=info msg="CreateContainer within sandbox \"2479962ee4d9201830c31bc2e7cf0d2475925be3860b0de1c989f3ac5f8fe02b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:36:35.146987 containerd[1891]: time="2025-01-29T11:36:35.146939039Z" level=info msg="CreateContainer within sandbox \"2479962ee4d9201830c31bc2e7cf0d2475925be3860b0de1c989f3ac5f8fe02b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"068009cf02bdafdf26d41fbaf5d703f2a5d2675c45ab73a363b15d2b6203890d\"" Jan 29 11:36:35.147742 containerd[1891]: time="2025-01-29T11:36:35.147710914Z" level=info msg="StartContainer for \"068009cf02bdafdf26d41fbaf5d703f2a5d2675c45ab73a363b15d2b6203890d\"" Jan 29 11:36:35.199614 systemd[1]: Started cri-containerd-068009cf02bdafdf26d41fbaf5d703f2a5d2675c45ab73a363b15d2b6203890d.scope - libcontainer container 068009cf02bdafdf26d41fbaf5d703f2a5d2675c45ab73a363b15d2b6203890d. Jan 29 11:36:35.272960 containerd[1891]: time="2025-01-29T11:36:35.272568985Z" level=info msg="StartContainer for \"068009cf02bdafdf26d41fbaf5d703f2a5d2675c45ab73a363b15d2b6203890d\" returns successfully" Jan 29 11:36:35.289470 systemd[1]: cri-containerd-068009cf02bdafdf26d41fbaf5d703f2a5d2675c45ab73a363b15d2b6203890d.scope: Deactivated successfully. Jan 29 11:36:35.351926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-068009cf02bdafdf26d41fbaf5d703f2a5d2675c45ab73a363b15d2b6203890d-rootfs.mount: Deactivated successfully. 
Jan 29 11:36:35.401032 containerd[1891]: time="2025-01-29T11:36:35.400807647Z" level=info msg="shim disconnected" id=068009cf02bdafdf26d41fbaf5d703f2a5d2675c45ab73a363b15d2b6203890d namespace=k8s.io
Jan 29 11:36:35.401032 containerd[1891]: time="2025-01-29T11:36:35.400882266Z" level=warning msg="cleaning up after shim disconnected" id=068009cf02bdafdf26d41fbaf5d703f2a5d2675c45ab73a363b15d2b6203890d namespace=k8s.io
Jan 29 11:36:35.401032 containerd[1891]: time="2025-01-29T11:36:35.400895130Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:36:35.422471 containerd[1891]: time="2025-01-29T11:36:35.422417114Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:36:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 11:36:35.696025 kubelet[2371]: E0129 11:36:35.695786 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:35.950266 kubelet[2371]: I0129 11:36:35.949254 2371 setters.go:600] "Node became not ready" node="172.31.25.200" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T11:36:35Z","lastTransitionTime":"2025-01-29T11:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 11:36:36.129690 containerd[1891]: time="2025-01-29T11:36:36.129646409Z" level=info msg="CreateContainer within sandbox \"2479962ee4d9201830c31bc2e7cf0d2475925be3860b0de1c989f3ac5f8fe02b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 11:36:36.158472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3647705315.mount: Deactivated successfully.
Jan 29 11:36:36.172772 containerd[1891]: time="2025-01-29T11:36:36.172723722Z" level=info msg="CreateContainer within sandbox \"2479962ee4d9201830c31bc2e7cf0d2475925be3860b0de1c989f3ac5f8fe02b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"95e4f7af2699ca5b681abd281eeaafbae86767c4dfa4dba48a381247e9e9f9df\""
Jan 29 11:36:36.174410 containerd[1891]: time="2025-01-29T11:36:36.173445345Z" level=info msg="StartContainer for \"95e4f7af2699ca5b681abd281eeaafbae86767c4dfa4dba48a381247e9e9f9df\""
Jan 29 11:36:36.201866 containerd[1891]: time="2025-01-29T11:36:36.201732562Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:36:36.211056 containerd[1891]: time="2025-01-29T11:36:36.210588827Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 29 11:36:36.213200 containerd[1891]: time="2025-01-29T11:36:36.213160862Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:36:36.221943 containerd[1891]: time="2025-01-29T11:36:36.221887913Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.155354528s"
Jan 29 11:36:36.222800 containerd[1891]: time="2025-01-29T11:36:36.222768981Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 29 11:36:36.234238 containerd[1891]: time="2025-01-29T11:36:36.232432451Z" level=info msg="CreateContainer within sandbox \"d3f170adaffabfd96d60298cfd9fb5211c30af42682e5b51b2b02b157af642d9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 29 11:36:36.236273 systemd[1]: Started cri-containerd-95e4f7af2699ca5b681abd281eeaafbae86767c4dfa4dba48a381247e9e9f9df.scope - libcontainer container 95e4f7af2699ca5b681abd281eeaafbae86767c4dfa4dba48a381247e9e9f9df.
Jan 29 11:36:36.260651 containerd[1891]: time="2025-01-29T11:36:36.260613070Z" level=info msg="CreateContainer within sandbox \"d3f170adaffabfd96d60298cfd9fb5211c30af42682e5b51b2b02b157af642d9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a2284c9a094147d0ad70193aff49de4ce2041f664b10c05e70eb61cccd3a8047\""
Jan 29 11:36:36.276645 containerd[1891]: time="2025-01-29T11:36:36.276607366Z" level=info msg="StartContainer for \"a2284c9a094147d0ad70193aff49de4ce2041f664b10c05e70eb61cccd3a8047\""
Jan 29 11:36:36.301499 containerd[1891]: time="2025-01-29T11:36:36.301446348Z" level=info msg="StartContainer for \"95e4f7af2699ca5b681abd281eeaafbae86767c4dfa4dba48a381247e9e9f9df\" returns successfully"
Jan 29 11:36:36.315929 systemd[1]: cri-containerd-95e4f7af2699ca5b681abd281eeaafbae86767c4dfa4dba48a381247e9e9f9df.scope: Deactivated successfully.
Jan 29 11:36:36.335475 systemd[1]: Started cri-containerd-a2284c9a094147d0ad70193aff49de4ce2041f664b10c05e70eb61cccd3a8047.scope - libcontainer container a2284c9a094147d0ad70193aff49de4ce2041f664b10c05e70eb61cccd3a8047.
Jan 29 11:36:36.390570 containerd[1891]: time="2025-01-29T11:36:36.390519130Z" level=info msg="StartContainer for \"a2284c9a094147d0ad70193aff49de4ce2041f664b10c05e70eb61cccd3a8047\" returns successfully"
Jan 29 11:36:36.507891 containerd[1891]: time="2025-01-29T11:36:36.507659786Z" level=info msg="shim disconnected" id=95e4f7af2699ca5b681abd281eeaafbae86767c4dfa4dba48a381247e9e9f9df namespace=k8s.io
Jan 29 11:36:36.507891 containerd[1891]: time="2025-01-29T11:36:36.507729924Z" level=warning msg="cleaning up after shim disconnected" id=95e4f7af2699ca5b681abd281eeaafbae86767c4dfa4dba48a381247e9e9f9df namespace=k8s.io
Jan 29 11:36:36.507891 containerd[1891]: time="2025-01-29T11:36:36.507743167Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:36:36.697094 kubelet[2371]: E0129 11:36:36.697009 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:36.782481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95e4f7af2699ca5b681abd281eeaafbae86767c4dfa4dba48a381247e9e9f9df-rootfs.mount: Deactivated successfully.
Jan 29 11:36:37.197175 kubelet[2371]: I0129 11:36:37.196751 2371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-dk6qg" podStartSLOduration=2.035038625 podStartE2EDuration="4.196727585s" podCreationTimestamp="2025-01-29 11:36:33 +0000 UTC" firstStartedPulling="2025-01-29 11:36:34.065526517 +0000 UTC m=+70.188622278" lastFinishedPulling="2025-01-29 11:36:36.227215478 +0000 UTC m=+72.350311238" observedRunningTime="2025-01-29 11:36:37.161018983 +0000 UTC m=+73.284114758" watchObservedRunningTime="2025-01-29 11:36:37.196727585 +0000 UTC m=+73.319823389"
Jan 29 11:36:37.205147 containerd[1891]: time="2025-01-29T11:36:37.204980100Z" level=info msg="CreateContainer within sandbox \"2479962ee4d9201830c31bc2e7cf0d2475925be3860b0de1c989f3ac5f8fe02b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 11:36:37.230284 containerd[1891]: time="2025-01-29T11:36:37.230246565Z" level=info msg="CreateContainer within sandbox \"2479962ee4d9201830c31bc2e7cf0d2475925be3860b0de1c989f3ac5f8fe02b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a1bef3da6d55efcdb0e9bfd1b573849a8faa7669f31814c3f24ce64a65d92cdd\""
Jan 29 11:36:37.231129 containerd[1891]: time="2025-01-29T11:36:37.231041131Z" level=info msg="StartContainer for \"a1bef3da6d55efcdb0e9bfd1b573849a8faa7669f31814c3f24ce64a65d92cdd\""
Jan 29 11:36:37.284709 systemd[1]: Started cri-containerd-a1bef3da6d55efcdb0e9bfd1b573849a8faa7669f31814c3f24ce64a65d92cdd.scope - libcontainer container a1bef3da6d55efcdb0e9bfd1b573849a8faa7669f31814c3f24ce64a65d92cdd.
Jan 29 11:36:37.316681 systemd[1]: cri-containerd-a1bef3da6d55efcdb0e9bfd1b573849a8faa7669f31814c3f24ce64a65d92cdd.scope: Deactivated successfully.
Jan 29 11:36:37.319676 containerd[1891]: time="2025-01-29T11:36:37.319478399Z" level=info msg="StartContainer for \"a1bef3da6d55efcdb0e9bfd1b573849a8faa7669f31814c3f24ce64a65d92cdd\" returns successfully"
Jan 29 11:36:37.361549 containerd[1891]: time="2025-01-29T11:36:37.361480375Z" level=info msg="shim disconnected" id=a1bef3da6d55efcdb0e9bfd1b573849a8faa7669f31814c3f24ce64a65d92cdd namespace=k8s.io
Jan 29 11:36:37.362036 containerd[1891]: time="2025-01-29T11:36:37.361559610Z" level=warning msg="cleaning up after shim disconnected" id=a1bef3da6d55efcdb0e9bfd1b573849a8faa7669f31814c3f24ce64a65d92cdd namespace=k8s.io
Jan 29 11:36:37.362036 containerd[1891]: time="2025-01-29T11:36:37.361573277Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:36:37.697503 kubelet[2371]: E0129 11:36:37.697455 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:37.780793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1bef3da6d55efcdb0e9bfd1b573849a8faa7669f31814c3f24ce64a65d92cdd-rootfs.mount: Deactivated successfully.
Jan 29 11:36:38.153634 containerd[1891]: time="2025-01-29T11:36:38.153592269Z" level=info msg="CreateContainer within sandbox \"2479962ee4d9201830c31bc2e7cf0d2475925be3860b0de1c989f3ac5f8fe02b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 11:36:38.207173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1854749266.mount: Deactivated successfully.
Jan 29 11:36:38.208605 containerd[1891]: time="2025-01-29T11:36:38.208558323Z" level=info msg="CreateContainer within sandbox \"2479962ee4d9201830c31bc2e7cf0d2475925be3860b0de1c989f3ac5f8fe02b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4121a74d65eb39364cd6a81231164ce15756ef377ffeb318cab459f7ad72a462\""
Jan 29 11:36:38.210105 containerd[1891]: time="2025-01-29T11:36:38.210069813Z" level=info msg="StartContainer for \"4121a74d65eb39364cd6a81231164ce15756ef377ffeb318cab459f7ad72a462\""
Jan 29 11:36:38.273099 systemd[1]: Started cri-containerd-4121a74d65eb39364cd6a81231164ce15756ef377ffeb318cab459f7ad72a462.scope - libcontainer container 4121a74d65eb39364cd6a81231164ce15756ef377ffeb318cab459f7ad72a462.
Jan 29 11:36:38.393216 containerd[1891]: time="2025-01-29T11:36:38.391430548Z" level=info msg="StartContainer for \"4121a74d65eb39364cd6a81231164ce15756ef377ffeb318cab459f7ad72a462\" returns successfully"
Jan 29 11:36:38.698442 kubelet[2371]: E0129 11:36:38.698363 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:38.780903 systemd[1]: run-containerd-runc-k8s.io-4121a74d65eb39364cd6a81231164ce15756ef377ffeb318cab459f7ad72a462-runc.fySyuw.mount: Deactivated successfully.
Jan 29 11:36:39.007648 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 29 11:36:39.698934 kubelet[2371]: E0129 11:36:39.698877 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:40.699683 kubelet[2371]: E0129 11:36:40.699623 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:41.701209 kubelet[2371]: E0129 11:36:41.701130 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:42.698902 systemd-networkd[1734]: lxc_health: Link UP
Jan 29 11:36:42.702402 kubelet[2371]: E0129 11:36:42.701530 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:42.704785 (udev-worker)[5172]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 11:36:42.711622 systemd-networkd[1734]: lxc_health: Gained carrier
Jan 29 11:36:43.702722 kubelet[2371]: E0129 11:36:43.702622 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:44.020684 kubelet[2371]: I0129 11:36:44.020247 2371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nf65c" podStartSLOduration=11.020226032 podStartE2EDuration="11.020226032s" podCreationTimestamp="2025-01-29 11:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:36:39.223230973 +0000 UTC m=+75.346326751" watchObservedRunningTime="2025-01-29 11:36:44.020226032 +0000 UTC m=+80.143321808"
Jan 29 11:36:44.249646 systemd-networkd[1734]: lxc_health: Gained IPv6LL
Jan 29 11:36:44.633485 kubelet[2371]: E0129 11:36:44.633399 2371 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:44.703572 kubelet[2371]: E0129 11:36:44.703484 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:45.704033 kubelet[2371]: E0129 11:36:45.703974 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:46.704169 kubelet[2371]: E0129 11:36:46.704121 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:46.999069 ntpd[1862]: Listen normally on 15 lxc_health [fe80::c8b3:88ff:fe02:5dcf%15]:123
Jan 29 11:36:47.000329 ntpd[1862]: 29 Jan 11:36:46 ntpd[1862]: Listen normally on 15 lxc_health [fe80::c8b3:88ff:fe02:5dcf%15]:123
Jan 29 11:36:47.705578 kubelet[2371]: E0129 11:36:47.705502 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:48.112658 systemd[1]: run-containerd-runc-k8s.io-4121a74d65eb39364cd6a81231164ce15756ef377ffeb318cab459f7ad72a462-runc.esdpS8.mount: Deactivated successfully.
Jan 29 11:36:48.706142 kubelet[2371]: E0129 11:36:48.706091 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:49.707195 kubelet[2371]: E0129 11:36:49.707138 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:50.707837 kubelet[2371]: E0129 11:36:50.707792 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:51.708457 kubelet[2371]: E0129 11:36:51.708372 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:52.708904 kubelet[2371]: E0129 11:36:52.708850 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:53.709735 kubelet[2371]: E0129 11:36:53.709673 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:54.710761 kubelet[2371]: E0129 11:36:54.710602 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:55.711756 kubelet[2371]: E0129 11:36:55.711700 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:56.712256 kubelet[2371]: E0129 11:36:56.712204 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:57.713406 kubelet[2371]: E0129 11:36:57.713339 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:58.714529 kubelet[2371]: E0129 11:36:58.714473 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:36:59.715675 kubelet[2371]: E0129 11:36:59.715620 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:00.716099 kubelet[2371]: E0129 11:37:00.716047 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:01.716634 kubelet[2371]: E0129 11:37:01.716576 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:02.717344 kubelet[2371]: E0129 11:37:02.717284 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:03.717524 kubelet[2371]: E0129 11:37:03.717465 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:04.629312 kubelet[2371]: E0129 11:37:04.629257 2371 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:04.718477 kubelet[2371]: E0129 11:37:04.718421 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:05.719139 kubelet[2371]: E0129 11:37:05.719084 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:06.398881 kubelet[2371]: E0129 11:37:06.398818 2371 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-01-29T11:36:56Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-29T11:36:56Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-29T11:36:56Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-29T11:36:56Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":71015439},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\\\",\\\"registry.k8s.io/kube-proxy:v1.31.5\\\"],\\\"sizeBytes\\\":30230147},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286}]}}\" for node \"172.31.25.200\": Patch \"https://172.31.25.7:6443/api/v1/nodes/172.31.25.200/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:37:06.720370 kubelet[2371]: E0129 11:37:06.720212 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:07.137893 kubelet[2371]: E0129 11:37:07.137834 2371 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.200?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:37:07.721109 kubelet[2371]: E0129 11:37:07.721048 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:08.721511 kubelet[2371]: E0129 11:37:08.721458 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:09.722457 kubelet[2371]: E0129 11:37:09.722401 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:10.723231 kubelet[2371]: E0129 11:37:10.723145 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:11.724331 kubelet[2371]: E0129 11:37:11.724277 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:12.724567 kubelet[2371]: E0129 11:37:12.724509 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:13.725506 kubelet[2371]: E0129 11:37:13.725443 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:14.725626 kubelet[2371]: E0129 11:37:14.725566 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:15.726802 kubelet[2371]: E0129 11:37:15.726679 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:16.399572 kubelet[2371]: E0129 11:37:16.399521 2371 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.25.200\": Get \"https://172.31.25.7:6443/api/v1/nodes/172.31.25.200?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:37:16.727053 kubelet[2371]: E0129 11:37:16.726933 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:17.138566 kubelet[2371]: E0129 11:37:17.138509 2371 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.200?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:37:17.727805 kubelet[2371]: E0129 11:37:17.727704 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:18.728691 kubelet[2371]: E0129 11:37:18.728638 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:19.729634 kubelet[2371]: E0129 11:37:19.729578 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:20.730419 kubelet[2371]: E0129 11:37:20.730333 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:21.730756 kubelet[2371]: E0129 11:37:21.730699 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:22.731537 kubelet[2371]: E0129 11:37:22.731478 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:23.732715 kubelet[2371]: E0129 11:37:23.732652 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:24.629416 kubelet[2371]: E0129 11:37:24.629344 2371 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:24.674007 containerd[1891]: time="2025-01-29T11:37:24.673851612Z" level=info msg="StopPodSandbox for \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\""
Jan 29 11:37:24.674007 containerd[1891]: time="2025-01-29T11:37:24.673967081Z" level=info msg="TearDown network for sandbox \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\" successfully"
Jan 29 11:37:24.674007 containerd[1891]: time="2025-01-29T11:37:24.673984019Z" level=info msg="StopPodSandbox for \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\" returns successfully"
Jan 29 11:37:24.674677 containerd[1891]: time="2025-01-29T11:37:24.674643275Z" level=info msg="RemovePodSandbox for \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\""
Jan 29 11:37:24.674729 containerd[1891]: time="2025-01-29T11:37:24.674675642Z" level=info msg="Forcibly stopping sandbox \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\""
Jan 29 11:37:24.674863 containerd[1891]: time="2025-01-29T11:37:24.674738991Z" level=info msg="TearDown network for sandbox \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\" successfully"
Jan 29 11:37:24.685749 containerd[1891]: time="2025-01-29T11:37:24.685682200Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:37:24.685910 containerd[1891]: time="2025-01-29T11:37:24.685784221Z" level=info msg="RemovePodSandbox \"05e8268e00942ac972a3a11a0a4fbea2cb2cd9cf43ad6c0b91559961961ac32d\" returns successfully"
Jan 29 11:37:24.732933 kubelet[2371]: E0129 11:37:24.732891 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:25.733942 kubelet[2371]: E0129 11:37:25.733885 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:26.400143 kubelet[2371]: E0129 11:37:26.400092 2371 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.25.200\": Get \"https://172.31.25.7:6443/api/v1/nodes/172.31.25.200?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:37:26.735594 kubelet[2371]: E0129 11:37:26.734999 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:27.139447 kubelet[2371]: E0129 11:37:27.139359 2371 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.200?timeout=10s\": context deadline exceeded"
Jan 29 11:37:27.735532 kubelet[2371]: E0129 11:37:27.735477 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:28.736057 kubelet[2371]: E0129 11:37:28.735996 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:29.736245 kubelet[2371]: E0129 11:37:29.736185 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:30.736977 kubelet[2371]: E0129 11:37:30.736917 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:31.738252 kubelet[2371]: E0129 11:37:31.738191 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:32.738644 kubelet[2371]: E0129 11:37:32.738583 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:33.739357 kubelet[2371]: E0129 11:37:33.739309 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:34.740020 kubelet[2371]: E0129 11:37:34.739962 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:35.741068 kubelet[2371]: E0129 11:37:35.741011 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:36.401258 kubelet[2371]: E0129 11:37:36.401213 2371 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.25.200\": Get \"https://172.31.25.7:6443/api/v1/nodes/172.31.25.200?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:37:36.473291 kubelet[2371]: E0129 11:37:36.473229 2371 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.200?timeout=10s\": read tcp 172.31.25.200:37266->172.31.25.7:6443: read: connection reset by peer"
Jan 29 11:37:36.484141 kubelet[2371]: E0129 11:37:36.484060 2371 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.200?timeout=10s\": write tcp 172.31.25.200:37646->172.31.25.7:6443: write: connection reset by peer"
Jan 29 11:37:36.484141 kubelet[2371]: I0129 11:37:36.484140 2371 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 29 11:37:36.484740 kubelet[2371]: E0129 11:37:36.484703 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.200?timeout=10s\": dial tcp 172.31.25.7:6443: connect: connection refused" interval="200ms"
Jan 29 11:37:36.686467 kubelet[2371]: E0129 11:37:36.686306 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.200?timeout=10s\": dial tcp 172.31.25.7:6443: connect: connection refused" interval="400ms"
Jan 29 11:37:36.741897 kubelet[2371]: E0129 11:37:36.741810 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:37.087663 kubelet[2371]: E0129 11:37:37.087608 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.200?timeout=10s\": dial tcp 172.31.25.7:6443: connect: connection refused" interval="800ms"
Jan 29 11:37:37.469418 kubelet[2371]: E0129 11:37:37.469269 2371 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.25.200\": Get \"https://172.31.25.7:6443/api/v1/nodes/172.31.25.200?timeout=10s\": dial tcp 172.31.25.7:6443: connect: connection refused - error from a previous attempt: read tcp 172.31.25.200:37266->172.31.25.7:6443: read: connection reset by peer"
Jan 29 11:37:37.469418 kubelet[2371]: E0129 11:37:37.469301 2371 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
Jan 29 11:37:37.742902 kubelet[2371]: E0129 11:37:37.742767 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:38.743214 kubelet[2371]: E0129 11:37:38.743157 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:39.744289 kubelet[2371]: E0129 11:37:39.744232 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:40.744466 kubelet[2371]: E0129 11:37:40.744409 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:41.745542 kubelet[2371]: E0129 11:37:41.745483 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:42.745746 kubelet[2371]: E0129 11:37:42.745686 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:43.746084 kubelet[2371]: E0129 11:37:43.746026 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:44.629726 kubelet[2371]: E0129 11:37:44.629665 2371 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:44.746649 kubelet[2371]: E0129 11:37:44.746605 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:45.747495 kubelet[2371]: E0129 11:37:45.747430 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:46.748229 kubelet[2371]: E0129 11:37:46.748171 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:47.748696 kubelet[2371]: E0129 11:37:47.748640 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:47.888628 kubelet[2371]: E0129 11:37:47.888561 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.25.200?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Jan 29 11:37:48.749547 kubelet[2371]: E0129 11:37:48.749498 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:49.749748 kubelet[2371]: E0129 11:37:49.749686 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:50.749943 kubelet[2371]: E0129 11:37:50.749881 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:51.750855 kubelet[2371]: E0129 11:37:51.750794 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:52.751920 kubelet[2371]: E0129 11:37:52.751866 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:53.752238 kubelet[2371]: E0129 11:37:53.752175 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:37:54.753216 kubelet[2371]: E0129 11:37:54.752908 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"