Jan 17 12:16:49.002856 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025 Jan 17 12:16:49.002892 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:16:49.002906 kernel: BIOS-provided physical RAM map: Jan 17 12:16:49.002916 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 17 12:16:49.006155 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 17 12:16:49.006244 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 17 12:16:49.006269 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Jan 17 12:16:49.006282 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Jan 17 12:16:49.006295 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Jan 17 12:16:49.006337 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 17 12:16:49.006350 kernel: NX (Execute Disable) protection: active Jan 17 12:16:49.006362 kernel: APIC: Static calls initialized Jan 17 12:16:49.006373 kernel: SMBIOS 2.7 present. 
Jan 17 12:16:49.006412 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 17 12:16:49.006431 kernel: Hypervisor detected: KVM Jan 17 12:16:49.006445 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 12:16:49.006460 kernel: kvm-clock: using sched offset of 6035746560 cycles Jan 17 12:16:49.006501 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 12:16:49.006516 kernel: tsc: Detected 2500.004 MHz processor Jan 17 12:16:49.010170 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 12:16:49.010205 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 12:16:49.010226 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Jan 17 12:16:49.010242 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 17 12:16:49.010256 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 12:16:49.010271 kernel: Using GB pages for direct mapping Jan 17 12:16:49.010285 kernel: ACPI: Early table checksum verification disabled Jan 17 12:16:49.010300 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Jan 17 12:16:49.010314 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Jan 17 12:16:49.010329 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 17 12:16:49.010343 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 17 12:16:49.010362 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Jan 17 12:16:49.010376 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 17 12:16:49.010391 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 17 12:16:49.010405 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 17 12:16:49.010420 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON 
AMZNSLIT 00000001 AMZN 00000001) Jan 17 12:16:49.010434 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 17 12:16:49.010449 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 17 12:16:49.010464 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 17 12:16:49.010478 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Jan 17 12:16:49.010497 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Jan 17 12:16:49.010517 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Jan 17 12:16:49.013301 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Jan 17 12:16:49.013333 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Jan 17 12:16:49.013350 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Jan 17 12:16:49.013372 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Jan 17 12:16:49.013387 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Jan 17 12:16:49.013402 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Jan 17 12:16:49.013418 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Jan 17 12:16:49.013434 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 12:16:49.013449 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 12:16:49.013465 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 17 12:16:49.013481 kernel: NUMA: Initialized distance table, cnt=1 Jan 17 12:16:49.013496 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Jan 17 12:16:49.013515 kernel: Zone ranges: Jan 17 12:16:49.013530 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:16:49.013545 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Jan 17 12:16:49.013561 kernel: Normal empty Jan 17 12:16:49.013577 kernel: Movable zone start for each 
node Jan 17 12:16:49.013592 kernel: Early memory node ranges Jan 17 12:16:49.013608 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 17 12:16:49.013623 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Jan 17 12:16:49.013639 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Jan 17 12:16:49.013658 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:16:49.013673 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 17 12:16:49.013689 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Jan 17 12:16:49.013704 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 17 12:16:49.013720 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 12:16:49.013736 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 17 12:16:49.013751 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 12:16:49.013832 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 12:16:49.013850 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 12:16:49.013866 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 12:16:49.013885 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:16:49.013901 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 12:16:49.013916 kernel: TSC deadline timer available Jan 17 12:16:49.013932 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 12:16:49.013947 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 12:16:49.013963 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jan 17 12:16:49.013979 kernel: Booting paravirtualized kernel on KVM Jan 17 12:16:49.013995 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:16:49.014011 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 12:16:49.014029 kernel: percpu: 
Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 17 12:16:49.014045 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 17 12:16:49.018302 kernel: pcpu-alloc: [0] 0 1 Jan 17 12:16:49.018330 kernel: kvm-guest: PV spinlocks enabled Jan 17 12:16:49.018348 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 12:16:49.018368 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:16:49.018385 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:16:49.018400 kernel: random: crng init done Jan 17 12:16:49.018422 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:16:49.018437 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 12:16:49.018453 kernel: Fallback order for Node 0: 0 Jan 17 12:16:49.018469 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 506242 Jan 17 12:16:49.018485 kernel: Policy zone: DMA32 Jan 17 12:16:49.018501 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:16:49.018517 kernel: Memory: 1932344K/2057760K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 125156K reserved, 0K cma-reserved) Jan 17 12:16:49.018738 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 12:16:49.018757 kernel: Kernel/User page tables isolation: enabled Jan 17 12:16:49.018887 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:16:49.018941 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:16:49.018957 kernel: Dynamic Preempt: voluntary Jan 17 12:16:49.019003 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:16:49.019020 kernel: rcu: RCU event tracing is enabled. Jan 17 12:16:49.019035 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 12:16:49.019051 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:16:49.019066 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:16:49.019080 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:16:49.019101 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:16:49.019116 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 12:16:49.019131 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 17 12:16:49.019147 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 17 12:16:49.019161 kernel: Console: colour VGA+ 80x25 Jan 17 12:16:49.019189 kernel: printk: console [ttyS0] enabled Jan 17 12:16:49.020497 kernel: ACPI: Core revision 20230628 Jan 17 12:16:49.020522 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 17 12:16:49.020540 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:16:49.020562 kernel: x2apic enabled Jan 17 12:16:49.020578 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:16:49.020606 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns Jan 17 12:16:49.020625 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004) Jan 17 12:16:49.020752 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 17 12:16:49.020770 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 17 12:16:49.020786 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:16:49.020803 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 12:16:49.020819 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:16:49.020836 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 17 12:16:49.020852 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 17 12:16:49.020868 kernel: RETBleed: Vulnerable Jan 17 12:16:49.020889 kernel: Speculative Store Bypass: Vulnerable Jan 17 12:16:49.020906 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 12:16:49.020922 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 12:16:49.020938 kernel: GDS: Unknown: Dependent on hypervisor status Jan 17 12:16:49.020954 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:16:49.020971 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:16:49.020988 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:16:49.021008 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 17 12:16:49.021024 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 17 12:16:49.021040 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 17 12:16:49.021056 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 17 12:16:49.021073 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 17 12:16:49.021090 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 17 12:16:49.021106 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:16:49.021123 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 17 12:16:49.021138 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 17 12:16:49.021155 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 17 12:16:49.021171 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 17 12:16:49.023557 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 17 12:16:49.023582 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 17 12:16:49.023599 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Jan 17 12:16:49.023616 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:16:49.023632 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:16:49.023648 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:16:49.023664 kernel: landlock: Up and running. Jan 17 12:16:49.023680 kernel: SELinux: Initializing. Jan 17 12:16:49.023760 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:16:49.023778 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 12:16:49.023795 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 17 12:16:49.023819 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:16:49.023836 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:16:49.023852 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:16:49.023868 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 17 12:16:49.023885 kernel: signal: max sigframe size: 3632 Jan 17 12:16:49.023901 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:16:49.023918 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:16:49.023934 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 12:16:49.023950 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:16:49.023970 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:16:49.023986 kernel: .... node #0, CPUs: #1 Jan 17 12:16:49.024003 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 17 12:16:49.024020 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 17 12:16:49.024036 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:16:49.024052 kernel: smpboot: Max logical packages: 1 Jan 17 12:16:49.024068 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS) Jan 17 12:16:49.024084 kernel: devtmpfs: initialized Jan 17 12:16:49.024103 kernel: x86/mm: Memory block size: 128MB Jan 17 12:16:49.024120 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:16:49.024136 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 12:16:49.024152 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:16:49.024167 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:16:49.026232 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:16:49.026259 kernel: audit: type=2000 audit(1737116208.019:1): state=initialized audit_enabled=0 res=1 Jan 17 12:16:49.026276 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:16:49.026293 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:16:49.026314 kernel: cpuidle: using governor menu Jan 17 12:16:49.026331 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:16:49.026347 kernel: dca service started, version 1.12.1 Jan 17 12:16:49.026364 kernel: PCI: Using configuration type 1 for base access Jan 17 12:16:49.026380 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 12:16:49.026396 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:16:49.026413 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:16:49.026429 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:16:49.026445 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:16:49.026463 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:16:49.026479 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:16:49.026496 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:16:49.026512 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:16:49.026528 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 17 12:16:49.026544 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:16:49.026560 kernel: ACPI: Interpreter enabled Jan 17 12:16:49.026576 kernel: ACPI: PM: (supports S0 S5) Jan 17 12:16:49.026592 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:16:49.026608 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:16:49.026628 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 12:16:49.026644 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 17 12:16:49.026660 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:16:49.026899 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:16:49.027046 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 17 12:16:49.027452 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 17 12:16:49.027479 kernel: acpiphp: Slot [3] registered Jan 17 12:16:49.027502 kernel: acpiphp: Slot [4] registered Jan 17 12:16:49.027518 kernel: acpiphp: Slot [5] registered Jan 17 12:16:49.027534 kernel: acpiphp: Slot [6] registered Jan 17 12:16:49.027550 
kernel: acpiphp: Slot [7] registered Jan 17 12:16:49.027566 kernel: acpiphp: Slot [8] registered Jan 17 12:16:49.027583 kernel: acpiphp: Slot [9] registered Jan 17 12:16:49.027599 kernel: acpiphp: Slot [10] registered Jan 17 12:16:49.027615 kernel: acpiphp: Slot [11] registered Jan 17 12:16:49.027631 kernel: acpiphp: Slot [12] registered Jan 17 12:16:49.027649 kernel: acpiphp: Slot [13] registered Jan 17 12:16:49.027666 kernel: acpiphp: Slot [14] registered Jan 17 12:16:49.027681 kernel: acpiphp: Slot [15] registered Jan 17 12:16:49.027697 kernel: acpiphp: Slot [16] registered Jan 17 12:16:49.027714 kernel: acpiphp: Slot [17] registered Jan 17 12:16:49.027729 kernel: acpiphp: Slot [18] registered Jan 17 12:16:49.027745 kernel: acpiphp: Slot [19] registered Jan 17 12:16:49.027761 kernel: acpiphp: Slot [20] registered Jan 17 12:16:49.027777 kernel: acpiphp: Slot [21] registered Jan 17 12:16:49.027792 kernel: acpiphp: Slot [22] registered Jan 17 12:16:49.027811 kernel: acpiphp: Slot [23] registered Jan 17 12:16:49.027827 kernel: acpiphp: Slot [24] registered Jan 17 12:16:49.027843 kernel: acpiphp: Slot [25] registered Jan 17 12:16:49.027859 kernel: acpiphp: Slot [26] registered Jan 17 12:16:49.027874 kernel: acpiphp: Slot [27] registered Jan 17 12:16:49.027890 kernel: acpiphp: Slot [28] registered Jan 17 12:16:49.027905 kernel: acpiphp: Slot [29] registered Jan 17 12:16:49.027921 kernel: acpiphp: Slot [30] registered Jan 17 12:16:49.027937 kernel: acpiphp: Slot [31] registered Jan 17 12:16:49.027956 kernel: PCI host bridge to bus 0000:00 Jan 17 12:16:49.028098 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:16:49.032347 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 12:16:49.032527 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:16:49.032652 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 17 12:16:49.032768 kernel: pci_bus 0000:00: root 
bus resource [bus 00-ff] Jan 17 12:16:49.032925 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 17 12:16:49.034273 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 17 12:16:49.034676 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jan 17 12:16:49.034837 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 17 12:16:49.034972 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jan 17 12:16:49.035108 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 17 12:16:49.036310 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 17 12:16:49.036455 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 17 12:16:49.036701 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 17 12:16:49.036973 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 17 12:16:49.037107 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 17 12:16:49.038746 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jan 17 12:16:49.038939 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Jan 17 12:16:49.039157 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 17 12:16:49.039616 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:16:49.039915 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 17 12:16:49.040274 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Jan 17 12:16:49.040442 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 17 12:16:49.040580 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Jan 17 12:16:49.040602 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 12:16:49.040620 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 12:16:49.040642 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:16:49.040659 kernel: ACPI: PCI: Interrupt 
link LNKD configured for IRQ 11 Jan 17 12:16:49.040676 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 17 12:16:49.040693 kernel: iommu: Default domain type: Translated Jan 17 12:16:49.040710 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:16:49.040727 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:16:49.040744 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:16:49.040761 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 17 12:16:49.040815 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Jan 17 12:16:49.041008 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 17 12:16:49.043336 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 17 12:16:49.043490 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 12:16:49.043578 kernel: vgaarb: loaded Jan 17 12:16:49.043597 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 17 12:16:49.043614 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jan 17 12:16:49.043630 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 12:16:49.043647 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:16:49.043664 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:16:49.043687 kernel: pnp: PnP ACPI init Jan 17 12:16:49.043704 kernel: pnp: PnP ACPI: found 5 devices Jan 17 12:16:49.043721 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:16:49.043738 kernel: NET: Registered PF_INET protocol family Jan 17 12:16:49.043756 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:16:49.043773 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 17 12:16:49.043790 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:16:49.043807 kernel: TCP established hash table entries: 16384 (order: 5, 
131072 bytes, linear) Jan 17 12:16:49.043824 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 17 12:16:49.043843 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 17 12:16:49.043860 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:16:49.043877 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 12:16:49.043894 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:16:49.043911 kernel: NET: Registered PF_XDP protocol family Jan 17 12:16:49.044045 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 12:16:49.044168 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 12:16:49.046369 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 12:16:49.046503 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 17 12:16:49.046645 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 17 12:16:49.046667 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:16:49.046685 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 12:16:49.046702 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns Jan 17 12:16:49.046719 kernel: clocksource: Switched to clocksource tsc Jan 17 12:16:49.046736 kernel: Initialise system trusted keyrings Jan 17 12:16:49.046752 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 17 12:16:49.046773 kernel: Key type asymmetric registered Jan 17 12:16:49.046789 kernel: Asymmetric key parser 'x509' registered Jan 17 12:16:49.046806 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:16:49.046822 kernel: io scheduler mq-deadline registered Jan 17 12:16:49.046839 kernel: io scheduler kyber registered Jan 17 12:16:49.046856 kernel: io scheduler bfq registered Jan 17 12:16:49.046873 kernel: ioatdma: Intel(R) QuickData 
Technology Driver 5.00 Jan 17 12:16:49.046889 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:16:49.046906 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:16:49.046926 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 12:16:49.046942 kernel: i8042: Warning: Keylock active Jan 17 12:16:49.046958 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:16:49.046975 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:16:49.047240 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 17 12:16:49.047384 kernel: rtc_cmos 00:00: registered as rtc0 Jan 17 12:16:49.047583 kernel: rtc_cmos 00:00: setting system clock to 2025-01-17T12:16:48 UTC (1737116208) Jan 17 12:16:49.047711 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 17 12:16:49.047737 kernel: intel_pstate: CPU model not supported Jan 17 12:16:49.047755 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:16:49.047772 kernel: Segment Routing with IPv6 Jan 17 12:16:49.047788 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:16:49.047805 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:16:49.047822 kernel: Key type dns_resolver registered Jan 17 12:16:49.047838 kernel: IPI shorthand broadcast: enabled Jan 17 12:16:49.047855 kernel: sched_clock: Marking stable (579001634, 235756898)->(888709392, -73950860) Jan 17 12:16:49.047872 kernel: registered taskstats version 1 Jan 17 12:16:49.047891 kernel: Loading compiled-in X.509 certificates Jan 17 12:16:49.047908 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:16:49.047924 kernel: Key type .fscrypt registered Jan 17 12:16:49.047940 kernel: Key type fscrypt-provisioning registered Jan 17 12:16:49.047957 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 12:16:49.047974 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:16:49.047990 kernel: ima: No architecture policies found Jan 17 12:16:49.048007 kernel: clk: Disabling unused clocks Jan 17 12:16:49.048024 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:16:49.048043 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:16:49.048060 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:16:49.048076 kernel: Run /init as init process Jan 17 12:16:49.048093 kernel: with arguments: Jan 17 12:16:49.048109 kernel: /init Jan 17 12:16:49.048125 kernel: with environment: Jan 17 12:16:49.048141 kernel: HOME=/ Jan 17 12:16:49.048157 kernel: TERM=linux Jan 17 12:16:49.048173 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:16:49.048211 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:16:49.048245 systemd[1]: Detected virtualization amazon. Jan 17 12:16:49.048267 systemd[1]: Detected architecture x86-64. Jan 17 12:16:49.048284 systemd[1]: Running in initrd. Jan 17 12:16:49.048309 systemd[1]: No hostname configured, using default hostname. Jan 17 12:16:49.048330 systemd[1]: Hostname set to . Jan 17 12:16:49.048348 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:16:49.048366 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:16:49.048385 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:16:49.048403 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 17 12:16:49.048422 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:16:49.048441 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:16:49.048459 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:16:49.048479 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:16:49.048500 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:16:49.048519 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:16:49.048537 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:16:49.048557 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:16:49.048574 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:16:49.048595 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:16:49.048613 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:16:49.048631 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:16:49.048650 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:16:49.048668 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:16:49.048686 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:16:49.048704 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:16:49.048722 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:16:49.048741 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:16:49.048762 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 17 12:16:49.048780 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:16:49.048798 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:16:49.048816 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:16:49.048834 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:16:49.048852 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:16:49.048870 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:16:49.048891 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:16:49.048909 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:16:49.048928 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:16:49.048973 systemd-journald[178]: Collecting audit messages is disabled.
Jan 17 12:16:49.049016 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:16:49.049035 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:16:49.049055 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 12:16:49.049079 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:16:49.049102 systemd-journald[178]: Journal started
Jan 17 12:16:49.049143 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2d8c241099962d1764f58d8d2e040a) is 4.8M, max 38.6M, 33.7M free.
Jan 17 12:16:48.991980 systemd-modules-load[179]: Inserted module 'overlay'
Jan 17 12:16:49.058610 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:16:49.058683 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:16:49.055863 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:16:49.192653 kernel: Bridge firewalling registered
Jan 17 12:16:49.065590 systemd-modules-load[179]: Inserted module 'br_netfilter'
Jan 17 12:16:49.069467 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:16:49.200603 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:16:49.204516 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:16:49.209839 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:16:49.216761 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:16:49.228475 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:16:49.237686 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:16:49.240700 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:16:49.265578 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:16:49.276440 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:16:49.283413 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:16:49.294474 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:16:49.323695 dracut-cmdline[211]: dracut-dracut-053
Jan 17 12:16:49.328141 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:16:49.366109 systemd-resolved[214]: Positive Trust Anchors:
Jan 17 12:16:49.366127 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:16:49.366205 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:16:49.380820 systemd-resolved[214]: Defaulting to hostname 'linux'.
Jan 17 12:16:49.383630 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:16:49.386475 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:16:49.420225 kernel: SCSI subsystem initialized
Jan 17 12:16:49.435319 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:16:49.462266 kernel: iscsi: registered transport (tcp)
Jan 17 12:16:49.487440 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:16:49.487521 kernel: QLogic iSCSI HBA Driver
Jan 17 12:16:49.536949 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:16:49.549328 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:16:49.593294 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:16:49.593371 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:16:49.593394 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:16:49.649210 kernel: raid6: avx512x4 gen() 14923 MB/s
Jan 17 12:16:49.666212 kernel: raid6: avx512x2 gen() 10119 MB/s
Jan 17 12:16:49.683233 kernel: raid6: avx512x1 gen() 11778 MB/s
Jan 17 12:16:49.700206 kernel: raid6: avx2x4 gen() 14579 MB/s
Jan 17 12:16:49.717211 kernel: raid6: avx2x2 gen() 13420 MB/s
Jan 17 12:16:49.734423 kernel: raid6: avx2x1 gen() 10995 MB/s
Jan 17 12:16:49.734498 kernel: raid6: using algorithm avx512x4 gen() 14923 MB/s
Jan 17 12:16:49.752242 kernel: raid6: .... xor() 6389 MB/s, rmw enabled
Jan 17 12:16:49.752330 kernel: raid6: using avx512x2 recovery algorithm
Jan 17 12:16:49.802210 kernel: xor: automatically using best checksumming function avx
Jan 17 12:16:50.082204 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:16:50.093333 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:16:50.100459 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:16:50.123309 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Jan 17 12:16:50.130378 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:16:50.137344 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:16:50.169633 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Jan 17 12:16:50.202895 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:16:50.209454 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:16:50.273568 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:16:50.284517 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:16:50.325047 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:16:50.331369 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:16:50.334227 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:16:50.337433 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:16:50.346709 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:16:50.394015 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:16:50.421549 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 17 12:16:50.436221 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 17 12:16:50.436536 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 17 12:16:50.436846 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:94:0f:d9:d5:d7
Jan 17 12:16:50.444575 (udev-worker)[459]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:16:50.450201 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 12:16:50.457749 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 17 12:16:50.458021 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 12:16:50.468615 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:16:50.470279 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:16:50.472553 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 17 12:16:50.474656 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:16:50.477897 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:16:50.479256 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:16:50.482849 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 12:16:50.482917 kernel: AES CTR mode by8 optimization enabled
Jan 17 12:16:50.483733 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:16:50.490510 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:16:50.490625 kernel: GPT:9289727 != 16777215
Jan 17 12:16:50.490656 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:16:50.490673 kernel: GPT:9289727 != 16777215
Jan 17 12:16:50.491755 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:16:50.491836 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:16:50.494479 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:16:50.623206 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (453)
Jan 17 12:16:50.632411 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (445)
Jan 17 12:16:50.700296 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:16:50.708475 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:16:50.730535 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 17 12:16:50.768427 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 12:16:50.772358 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:16:50.785591 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 17 12:16:50.800339 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 17 12:16:50.800453 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 17 12:16:50.814456 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:16:50.848778 disk-uuid[629]: Primary Header is updated.
Jan 17 12:16:50.848778 disk-uuid[629]: Secondary Entries is updated.
Jan 17 12:16:50.848778 disk-uuid[629]: Secondary Header is updated.
Jan 17 12:16:50.855214 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:16:50.862200 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:16:50.871206 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:16:51.868366 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:16:51.869508 disk-uuid[630]: The operation has completed successfully.
Jan 17 12:16:52.016040 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:16:52.016195 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:16:52.037382 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:16:52.041517 sh[973]: Success
Jan 17 12:16:52.062222 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 12:16:52.168374 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:16:52.179297 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:16:52.182859 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:16:52.246266 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85
Jan 17 12:16:52.246343 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:16:52.248663 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:16:52.248715 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:16:52.249609 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:16:52.379207 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 12:16:52.381611 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:16:52.383859 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:16:52.393602 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:16:52.399483 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:16:52.442066 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:16:52.442545 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:16:52.442571 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 12:16:52.449219 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 12:16:52.464286 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:16:52.464956 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:16:52.472298 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:16:52.484474 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:16:52.552489 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:16:52.561357 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:16:52.590560 systemd-networkd[1165]: lo: Link UP
Jan 17 12:16:52.590573 systemd-networkd[1165]: lo: Gained carrier
Jan 17 12:16:52.592468 systemd-networkd[1165]: Enumeration completed
Jan 17 12:16:52.592899 systemd-networkd[1165]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:16:52.592904 systemd-networkd[1165]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:16:52.593667 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:16:52.596068 systemd[1]: Reached target network.target - Network.
Jan 17 12:16:52.605798 systemd-networkd[1165]: eth0: Link UP
Jan 17 12:16:52.605804 systemd-networkd[1165]: eth0: Gained carrier
Jan 17 12:16:52.605823 systemd-networkd[1165]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:16:52.636319 systemd-networkd[1165]: eth0: DHCPv4 address 172.31.21.110/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 17 12:16:52.879912 ignition[1096]: Ignition 2.19.0
Jan 17 12:16:52.879926 ignition[1096]: Stage: fetch-offline
Jan 17 12:16:52.880222 ignition[1096]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:16:52.880236 ignition[1096]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:16:52.883521 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:16:52.881894 ignition[1096]: Ignition finished successfully
Jan 17 12:16:52.896586 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 12:16:52.918662 ignition[1174]: Ignition 2.19.0
Jan 17 12:16:52.918676 ignition[1174]: Stage: fetch
Jan 17 12:16:52.919128 ignition[1174]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:16:52.919141 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:16:52.919345 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:16:52.974479 ignition[1174]: PUT result: OK
Jan 17 12:16:53.001662 ignition[1174]: parsed url from cmdline: ""
Jan 17 12:16:53.001672 ignition[1174]: no config URL provided
Jan 17 12:16:53.001684 ignition[1174]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:16:53.001700 ignition[1174]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:16:53.001726 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:16:53.004023 ignition[1174]: PUT result: OK
Jan 17 12:16:53.004086 ignition[1174]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 17 12:16:53.015415 ignition[1174]: GET result: OK
Jan 17 12:16:53.015470 ignition[1174]: parsing config with SHA512: fa6f9f20b4cc944307942714c453f644d7911a9f50b19785fceb22f268c3781ef1f879b72d4f38abff167565261d2d9768971dce7be9d95abe6eefd358ea3ae8
Jan 17 12:16:53.023154 unknown[1174]: fetched base config from "system"
Jan 17 12:16:53.023566 ignition[1174]: fetch: fetch complete
Jan 17 12:16:53.023168 unknown[1174]: fetched base config from "system"
Jan 17 12:16:53.023571 ignition[1174]: fetch: fetch passed
Jan 17 12:16:53.023177 unknown[1174]: fetched user config from "aws"
Jan 17 12:16:53.023614 ignition[1174]: Ignition finished successfully
Jan 17 12:16:53.029043 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 12:16:53.041397 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:16:53.065410 ignition[1181]: Ignition 2.19.0
Jan 17 12:16:53.065424 ignition[1181]: Stage: kargs
Jan 17 12:16:53.068518 ignition[1181]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:16:53.068542 ignition[1181]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:16:53.068670 ignition[1181]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:16:53.072214 ignition[1181]: PUT result: OK
Jan 17 12:16:53.078960 ignition[1181]: kargs: kargs passed
Jan 17 12:16:53.079046 ignition[1181]: Ignition finished successfully
Jan 17 12:16:53.082690 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:16:53.097458 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:16:53.145006 ignition[1187]: Ignition 2.19.0
Jan 17 12:16:53.145021 ignition[1187]: Stage: disks
Jan 17 12:16:53.145526 ignition[1187]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:16:53.145540 ignition[1187]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:16:53.145648 ignition[1187]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:16:53.147117 ignition[1187]: PUT result: OK
Jan 17 12:16:53.161755 ignition[1187]: disks: disks passed
Jan 17 12:16:53.161844 ignition[1187]: Ignition finished successfully
Jan 17 12:16:53.165313 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:16:53.165603 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:16:53.169739 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:16:53.173074 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:16:53.175786 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:16:53.180142 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:16:53.187334 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:16:53.225309 systemd-fsck[1195]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 12:16:53.229269 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:16:53.235292 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:16:53.372308 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none.
Jan 17 12:16:53.372125 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:16:53.378321 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:16:53.404380 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:16:53.429703 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:16:53.434827 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 12:16:53.434889 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:16:53.434926 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:16:53.453783 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:16:53.462766 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:16:53.470196 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1214)
Jan 17 12:16:53.473046 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:16:53.473156 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:16:53.473188 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 12:16:53.495277 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 12:16:53.495341 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:16:53.748299 initrd-setup-root[1238]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 12:16:53.770513 initrd-setup-root[1245]: cut: /sysroot/etc/group: No such file or directory
Jan 17 12:16:53.780381 initrd-setup-root[1252]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 12:16:53.790328 initrd-setup-root[1259]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 12:16:54.068805 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 12:16:54.084452 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 12:16:54.091575 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 12:16:54.102775 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 12:16:54.104399 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:16:54.159675 ignition[1327]: INFO : Ignition 2.19.0
Jan 17 12:16:54.159675 ignition[1327]: INFO : Stage: mount
Jan 17 12:16:54.163416 ignition[1327]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:16:54.163416 ignition[1327]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:16:54.163416 ignition[1327]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:16:54.163416 ignition[1327]: INFO : PUT result: OK
Jan 17 12:16:54.163286 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 12:16:54.179309 ignition[1327]: INFO : mount: mount passed
Jan 17 12:16:54.179309 ignition[1327]: INFO : Ignition finished successfully
Jan 17 12:16:54.182396 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 12:16:54.189340 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 12:16:54.278915 systemd-networkd[1165]: eth0: Gained IPv6LL
Jan 17 12:16:54.385718 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:16:54.407206 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1339)
Jan 17 12:16:54.408202 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:16:54.408263 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:16:54.409512 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 12:16:54.414205 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 12:16:54.416663 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:16:54.443379 ignition[1356]: INFO : Ignition 2.19.0
Jan 17 12:16:54.443379 ignition[1356]: INFO : Stage: files
Jan 17 12:16:54.445649 ignition[1356]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:16:54.445649 ignition[1356]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:16:54.445649 ignition[1356]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:16:54.451090 ignition[1356]: INFO : PUT result: OK
Jan 17 12:16:54.453036 ignition[1356]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 12:16:54.455076 ignition[1356]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 12:16:54.455076 ignition[1356]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 12:16:54.463689 ignition[1356]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 12:16:54.465497 ignition[1356]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 12:16:54.467270 unknown[1356]: wrote ssh authorized keys file for user: core
Jan 17 12:16:54.468958 ignition[1356]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 12:16:54.477154 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 12:16:54.479281 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 12:16:54.479281 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:16:54.479281 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:16:54.479281 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 12:16:54.479281 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 12:16:54.479281 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 12:16:54.479281 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 17 12:16:54.855222 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 17 12:16:55.401577 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 12:16:55.406987 ignition[1356]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:16:55.413175 ignition[1356]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:16:55.423911 ignition[1356]: INFO : files: files passed
Jan 17 12:16:55.423911 ignition[1356]: INFO : Ignition finished successfully
Jan 17 12:16:55.426159 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 12:16:55.431514 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 12:16:55.438488 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 12:16:55.443691 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 12:16:55.445076 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 12:16:55.467556 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:16:55.470117 initrd-setup-root-after-ignition[1384]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:16:55.470117 initrd-setup-root-after-ignition[1384]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:16:55.469653 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:16:55.473939 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 12:16:55.484491 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 12:16:55.528301 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 12:16:55.528435 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 12:16:55.532474 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 12:16:55.532589 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 12:16:55.532723 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 12:16:55.542097 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 12:16:55.559573 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:16:55.568109 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 12:16:55.584407 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:16:55.587143 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:16:55.590169 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 12:16:55.594523 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 12:16:55.594667 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 12:16:55.599531 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 12:16:55.600857 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 12:16:55.603316 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 12:16:55.607534 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:16:55.609153 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 12:16:55.611660 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 12:16:55.613719 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:16:55.617852 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 12:16:55.619018 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 12:16:55.622008 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 12:16:55.623852 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 12:16:55.623977 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:16:55.627431 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:16:55.629413 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:16:55.633706 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 12:16:55.633830 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:16:55.635247 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 12:16:55.635375 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:16:55.639536 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 12:16:55.639673 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:16:55.645128 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 12:16:55.645330 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 12:16:55.683485 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 12:16:55.688805 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 12:16:55.689037 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:16:55.711511 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 12:16:55.714123 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 12:16:55.716150 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:16:55.723439 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 12:16:55.724960 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:16:55.725069 ignition[1408]: INFO : Ignition 2.19.0
Jan 17 12:16:55.725069 ignition[1408]: INFO : Stage: umount
Jan 17 12:16:55.729881 ignition[1408]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:16:55.729881 ignition[1408]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:16:55.729881 ignition[1408]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:16:55.729881 ignition[1408]: INFO : PUT result: OK
Jan 17 12:16:55.738766 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 12:16:55.738997 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 12:16:55.782063 ignition[1408]: INFO : umount: umount passed
Jan 17 12:16:55.782063 ignition[1408]: INFO : Ignition finished successfully
Jan 17 12:16:55.780896 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 12:16:55.781021 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 12:16:55.793325 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 12:16:55.793736 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 12:16:55.799109 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 12:16:55.799207 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 12:16:55.802795 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 12:16:55.802882 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 12:16:55.805977 systemd[1]: Stopped target network.target - Network.
Jan 17 12:16:55.808669 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 12:16:55.809872 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:16:55.815765 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 12:16:55.820331 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 12:16:55.830682 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:16:55.837485 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 12:16:55.842696 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 12:16:55.860997 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 12:16:55.861444 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:16:55.865487 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 12:16:55.865543 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:16:55.867419 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 12:16:55.867470 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 12:16:55.868635 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 12:16:55.868678 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 12:16:55.870029 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 12:16:55.871218 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 12:16:55.875315 systemd-networkd[1165]: eth0: DHCPv6 lease lost
Jan 17 12:16:55.877522 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 12:16:55.878066 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 12:16:55.878192 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 12:16:55.885292 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 12:16:55.885428 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 12:16:55.894090 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 12:16:55.894158 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:16:55.915364 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 12:16:55.916469 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 12:16:55.916567 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:16:55.919375 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 12:16:55.919450 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:16:55.920674 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 12:16:55.920736 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:16:55.922862 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 12:16:55.922921 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:16:55.925138 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:16:55.978055 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 12:16:55.980704 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:16:55.998436 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 12:16:55.998562 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 12:16:56.004163 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 12:16:56.004342 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 12:16:56.008754 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 12:16:56.008853 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:16:56.011076 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 12:16:56.011135 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:16:56.013319 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 12:16:56.013380 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:16:56.018573 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 12:16:56.018638 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:16:56.021859 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:16:56.021922 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:16:56.027231 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 12:16:56.027298 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 12:16:56.040478 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 12:16:56.043511 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 12:16:56.043615 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:16:56.045343 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:16:56.045401 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:16:56.074804 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 12:16:56.074968 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 12:16:56.078166 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 12:16:56.092408 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 12:16:56.125585 systemd[1]: Switching root.
Jan 17 12:16:56.179733 systemd-journald[178]: Journal stopped
Jan 17 12:16:58.651098 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Jan 17 12:16:58.651228 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 12:16:58.651257 kernel: SELinux: policy capability open_perms=1
Jan 17 12:16:58.651275 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 12:16:58.651344 kernel: SELinux: policy capability always_check_network=0
Jan 17 12:16:58.651363 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 12:16:58.651395 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 12:16:58.651412 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 12:16:58.651438 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 12:16:58.651455 kernel: audit: type=1403 audit(1737116216.698:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 12:16:58.651475 systemd[1]: Successfully loaded SELinux policy in 90.687ms.
Jan 17 12:16:58.651505 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.897ms.
Jan 17 12:16:58.651525 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:16:58.651544 systemd[1]: Detected virtualization amazon.
Jan 17 12:16:58.651569 systemd[1]: Detected architecture x86-64.
Jan 17 12:16:58.651587 systemd[1]: Detected first boot.
Jan 17 12:16:58.651605 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:16:58.651627 zram_generator::config[1451]: No configuration found.
Jan 17 12:16:58.651651 systemd[1]: Populated /etc with preset unit settings.
Jan 17 12:16:58.651671 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 12:16:58.651692 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 12:16:58.651712 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 12:16:58.656292 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 12:16:58.656321 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 12:16:58.656341 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 12:16:58.656362 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 12:16:58.656388 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 12:16:58.656413 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 12:16:58.656441 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 12:16:58.656468 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 12:16:58.656496 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:16:58.656523 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:16:58.656600 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 12:16:58.656627 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 12:16:58.656651 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 12:16:58.656677 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:16:58.656702 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 12:16:58.656726 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:16:58.656752 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 12:16:58.656781 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 12:16:58.656857 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:16:58.656881 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 12:16:58.656906 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:16:58.656931 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:16:58.656955 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:16:58.656980 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:16:58.657006 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 12:16:58.657036 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 12:16:58.657061 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:16:58.657085 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:16:58.657109 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:16:58.657134 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 12:16:58.657159 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 12:16:58.658333 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 12:16:58.658415 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 12:16:58.658438 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:16:58.658464 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 12:16:58.658511 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 12:16:58.658531 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 12:16:58.658552 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 12:16:58.658595 systemd[1]: Reached target machines.target - Containers.
Jan 17 12:16:58.658614 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 12:16:58.658632 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:16:58.659505 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:16:58.659532 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 12:16:58.659557 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:16:58.659576 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:16:58.659596 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:16:58.659615 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 12:16:58.659640 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:16:58.659694 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 12:16:58.659713 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 12:16:58.659731 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 12:16:58.659779 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 12:16:58.659799 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 12:16:58.659817 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:16:58.659861 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:16:58.659880 kernel: loop: module loaded
Jan 17 12:16:58.659900 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 12:16:58.659956 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 12:16:58.660060 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:16:58.660084 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 12:16:58.660133 systemd[1]: Stopped verity-setup.service.
Jan 17 12:16:58.660150 kernel: fuse: init (API version 7.39)
Jan 17 12:16:58.660169 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:16:58.660201 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 12:16:58.660458 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 12:16:58.660480 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 12:16:58.660498 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 12:16:58.660517 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 12:16:58.660541 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 12:16:58.660608 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:16:58.660628 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 12:16:58.660648 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 12:16:58.661286 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:16:58.661330 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:16:58.661355 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:16:58.661378 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:16:58.661403 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 12:16:58.661427 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 12:16:58.661450 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:16:58.661475 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:16:58.661504 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 12:16:58.661528 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 12:16:58.661553 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 12:16:58.661578 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 12:16:58.661603 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:16:58.661627 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:16:58.661758 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 12:16:58.661791 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 12:16:58.661819 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:16:58.661845 kernel: ACPI: bus type drm_connector registered
Jan 17 12:16:58.661870 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 12:16:58.661895 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 12:16:58.661964 systemd-journald[1530]: Collecting audit messages is disabled.
Jan 17 12:16:58.662011 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 12:16:58.662037 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 12:16:58.662067 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 12:16:58.662093 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:16:58.662119 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 12:16:58.662144 systemd-journald[1530]: Journal started
Jan 17 12:16:58.666680 systemd-journald[1530]: Runtime Journal (/run/log/journal/ec2d8c241099962d1764f58d8d2e040a) is 4.8M, max 38.6M, 33.7M free.
Jan 17 12:16:58.675374 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 12:16:57.969850 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 12:16:58.012318 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 17 12:16:58.012729 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 12:16:58.689547 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 12:16:58.695889 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:16:58.707521 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 12:16:58.707611 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:16:58.719397 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 12:16:58.740289 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 12:16:58.750150 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 12:16:58.759388 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:16:58.755979 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:16:58.757864 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 12:16:58.766285 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:16:58.791495 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 12:16:58.799075 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 12:16:58.808628 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 12:16:58.819662 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 12:16:58.834421 kernel: loop0: detected capacity change from 0 to 61336
Jan 17 12:16:58.829463 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 12:16:58.877876 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 12:16:58.879563 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 12:16:58.881451 systemd-journald[1530]: Time spent on flushing to /var/log/journal/ec2d8c241099962d1764f58d8d2e040a is 56.679ms for 952 entries.
Jan 17 12:16:58.881451 systemd-journald[1530]: System Journal (/var/log/journal/ec2d8c241099962d1764f58d8d2e040a) is 8.0M, max 195.6M, 187.6M free.
Jan 17 12:16:58.946276 systemd-journald[1530]: Received client request to flush runtime journal.
Jan 17 12:16:58.946342 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 12:16:58.894422 udevadm[1589]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 17 12:16:58.946356 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 12:16:58.955741 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:16:58.960562 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 12:16:58.977318 kernel: loop1: detected capacity change from 0 to 140768
Jan 17 12:16:59.028012 systemd-tmpfiles[1595]: ACLs are not supported, ignoring.
Jan 17 12:16:59.028042 systemd-tmpfiles[1595]: ACLs are not supported, ignoring.
Jan 17 12:16:59.038927 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:16:59.101215 kernel: loop2: detected capacity change from 0 to 211296
Jan 17 12:16:59.215870 kernel: loop3: detected capacity change from 0 to 142488
Jan 17 12:16:59.388871 kernel: loop4: detected capacity change from 0 to 61336
Jan 17 12:16:59.432087 kernel: loop5: detected capacity change from 0 to 140768
Jan 17 12:16:59.498436 kernel: loop6: detected capacity change from 0 to 211296
Jan 17 12:16:59.542210 kernel: loop7: detected capacity change from 0 to 142488
Jan 17 12:16:59.580752 (sd-merge)[1603]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 17 12:16:59.583235 (sd-merge)[1603]: Merged extensions into '/usr'.
Jan 17 12:16:59.592149 systemd[1]: Reloading requested from client PID 1561 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 12:16:59.592372 systemd[1]: Reloading...
Jan 17 12:16:59.786216 zram_generator::config[1632]: No configuration found.
Jan 17 12:17:00.108466 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:17:00.223015 systemd[1]: Reloading finished in 629 ms.
Jan 17 12:17:00.265092 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 12:17:00.278273 systemd[1]: Starting ensure-sysext.service...
Jan 17 12:17:00.292373 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:17:00.296797 systemd[1]: Reloading requested from client PID 1677 ('systemctl') (unit ensure-sysext.service)...
Jan 17 12:17:00.296820 systemd[1]: Reloading...
Jan 17 12:17:00.324670 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 12:17:00.325158 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 12:17:00.326069 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 12:17:00.326438 systemd-tmpfiles[1678]: ACLs are not supported, ignoring.
Jan 17 12:17:00.326535 systemd-tmpfiles[1678]: ACLs are not supported, ignoring.
Jan 17 12:17:00.337206 systemd-tmpfiles[1678]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:17:00.337222 systemd-tmpfiles[1678]: Skipping /boot
Jan 17 12:17:00.364101 systemd-tmpfiles[1678]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 12:17:00.364119 systemd-tmpfiles[1678]: Skipping /boot
Jan 17 12:17:00.473205 zram_generator::config[1705]: No configuration found.
Jan 17 12:17:00.699760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:17:00.743536 ldconfig[1557]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 12:17:00.798050 systemd[1]: Reloading finished in 500 ms.
Jan 17 12:17:00.821550 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 12:17:00.823429 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 12:17:00.832089 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:17:00.866506 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 12:17:00.873427 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 12:17:00.886435 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 12:17:00.902743 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:17:00.908481 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:17:00.921542 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 12:17:00.944976 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 12:17:00.951238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:00.951544 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:17:00.962598 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 12:17:00.967532 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 12:17:00.977549 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 12:17:00.979638 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:17:00.979846 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:01.022641 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 12:17:01.041554 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:01.041969 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:17:01.042294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:17:01.042490 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:01.049816 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:01.050592 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 12:17:01.054427 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 12:17:01.056027 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 12:17:01.056401 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 12:17:01.057759 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 12:17:01.067692 systemd[1]: Finished ensure-sysext.service.
Jan 17 12:17:01.070160 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 12:17:01.070407 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 12:17:01.072423 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 12:17:01.072612 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 12:17:01.077597 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 12:17:01.103432 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 12:17:01.103690 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 12:17:01.105954 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 12:17:01.118358 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 12:17:01.118572 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 12:17:01.122684 systemd-udevd[1768]: Using default interface naming scheme 'v255'.
Jan 17 12:17:01.132586 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 12:17:01.133508 augenrules[1789]: No rules
Jan 17 12:17:01.144488 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 12:17:01.144929 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 12:17:01.164956 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 12:17:01.168101 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 12:17:01.198543 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 12:17:01.200626 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 12:17:01.213687 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:17:01.227466 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:17:01.287923 systemd-resolved[1767]: Positive Trust Anchors:
Jan 17 12:17:01.287953 systemd-resolved[1767]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:17:01.288002 systemd-resolved[1767]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:17:01.300832 systemd-resolved[1767]: Defaulting to hostname 'linux'.
Jan 17 12:17:01.308881 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:17:01.311373 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:17:01.331104 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 17 12:17:01.386601 (udev-worker)[1806]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:17:01.387117 systemd-networkd[1808]: lo: Link UP
Jan 17 12:17:01.387567 systemd-networkd[1808]: lo: Gained carrier
Jan 17 12:17:01.391415 systemd-networkd[1808]: Enumeration completed
Jan 17 12:17:01.391556 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:17:01.392514 systemd-networkd[1808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:17:01.392611 systemd-networkd[1808]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:17:01.392848 systemd[1]: Reached target network.target - Network.
Jan 17 12:17:01.403333 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 12:17:01.403539 systemd-networkd[1808]: eth0: Link UP
Jan 17 12:17:01.403786 systemd-networkd[1808]: eth0: Gained carrier
Jan 17 12:17:01.403812 systemd-networkd[1808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:17:01.416347 systemd-networkd[1808]: eth0: DHCPv4 address 172.31.21.110/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 17 12:17:01.459649 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Jan 17 12:17:01.477249 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 17 12:17:01.492413 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Jan 17 12:17:01.495688 kernel: ACPI: button: Power Button [PWRF]
Jan 17 12:17:01.500029 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Jan 17 12:17:01.501901 kernel: ACPI: button: Sleep Button [SLPF]
Jan 17 12:17:01.510811 systemd-networkd[1808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:17:01.626228 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 12:17:01.633437 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1805)
Jan 17 12:17:01.644226 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:17:01.812504 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 12:17:01.815067 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 12:17:01.828987 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 12:17:02.013870 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 12:17:02.016120 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:17:02.062228 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 12:17:02.073316 lvm[1921]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 12:17:02.141471 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 12:17:02.143701 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:17:02.146971 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:17:02.148597 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 12:17:02.156523 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 12:17:02.158717 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 12:17:02.160915 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 12:17:02.166281 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 12:17:02.175564 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 12:17:02.175623 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:17:02.176908 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:17:02.182282 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 12:17:02.186166 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 12:17:02.209789 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 12:17:02.224477 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 12:17:02.228767 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 12:17:02.234685 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:17:02.236561 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:17:02.249387 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 12:17:02.249429 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 12:17:02.271360 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 12:17:02.284592 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 17 12:17:02.295483 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 12:17:02.296996 lvm[1929]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 12:17:02.320974 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 12:17:02.350314 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 12:17:02.355072 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 12:17:02.363144 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 12:17:02.389486 systemd[1]: Started ntpd.service - Network Time Service.
Jan 17 12:17:02.417424 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 17 12:17:02.441425 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 12:17:02.445786 jq[1933]: false
Jan 17 12:17:02.445764 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 12:17:02.475413 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 12:17:02.477792 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 17 12:17:02.478892 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 17 12:17:02.511561 systemd[1]: Starting update-engine.service - Update Engine...
Jan 17 12:17:02.521441 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 17 12:17:02.527916 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 12:17:02.554025 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 17 12:17:02.558253 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 17 12:17:02.585724 jq[1943]: true
Jan 17 12:17:02.656156 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 17 12:17:02.658637 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 17 12:17:02.666380 systemd-networkd[1808]: eth0: Gained IPv6LL
Jan 17 12:17:02.664840 dbus-daemon[1932]: [system] SELinux support is enabled
Jan 17 12:17:02.673917 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 17 12:17:02.708071 dbus-daemon[1932]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1808 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 17 12:17:02.723299 (ntainerd)[1955]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 17 12:17:02.761707 extend-filesystems[1934]: Found loop4
Jan 17 12:17:02.761707 extend-filesystems[1934]: Found loop5
Jan 17 12:17:02.761707 extend-filesystems[1934]: Found loop6
Jan 17 12:17:02.761707 extend-filesystems[1934]: Found loop7
Jan 17 12:17:02.761707 extend-filesystems[1934]: Found nvme0n1
Jan 17 12:17:02.761707 extend-filesystems[1934]: Found nvme0n1p1
Jan 17 12:17:02.761707 extend-filesystems[1934]: Found nvme0n1p2
Jan 17 12:17:02.761707 extend-filesystems[1934]: Found nvme0n1p3
Jan 17 12:17:02.761707 extend-filesystems[1934]: Found usr
Jan 17 12:17:02.761707 extend-filesystems[1934]: Found nvme0n1p4
Jan 17 12:17:02.751688 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 17 12:17:02.835710 jq[1950]: true
Jan 17 12:17:02.782691 dbus-daemon[1932]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 17 12:17:02.836085 update_engine[1942]: I20250117 12:17:02.795995 1942 main.cc:92] Flatcar Update Engine starting
Jan 17 12:17:02.836442 extend-filesystems[1934]: Found nvme0n1p6
Jan 17 12:17:02.836442 extend-filesystems[1934]: Found nvme0n1p7
Jan 17 12:17:02.836442 extend-filesystems[1934]: Found nvme0n1p9
Jan 17 12:17:02.836442 extend-filesystems[1934]: Checking size of /dev/nvme0n1p9
Jan 17 12:17:02.758931 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 17 12:17:02.758974 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 17 12:17:02.762589 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 17 12:17:02.762620 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 17 12:17:02.776854 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 12:17:02.781686 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 17 12:17:02.797452 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 12:17:02.831853 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jan 17 12:17:02.848517 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:17:02.859395 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 17 12:17:02.894606 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:35 UTC 2025 (1): Starting
Jan 17 12:17:02.894606 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 17 12:17:02.894606 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: ----------------------------------------------------
Jan 17 12:17:02.894606 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: ntp-4 is maintained by Network Time Foundation,
Jan 17 12:17:02.894606 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 17 12:17:02.894606 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: corporation. Support and training for ntp-4 are
Jan 17 12:17:02.894606 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: available at https://www.nwtime.org/support
Jan 17 12:17:02.894606 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: ----------------------------------------------------
Jan 17 12:17:02.872371 ntpd[1936]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:35 UTC 2025 (1): Starting
Jan 17 12:17:02.895151 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 17 12:17:02.904486 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: proto: precision = 0.067 usec (-24)
Jan 17 12:17:02.872402 ntpd[1936]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 17 12:17:02.872415 ntpd[1936]: ----------------------------------------------------
Jan 17 12:17:02.872425 ntpd[1936]: ntp-4 is maintained by Network Time Foundation,
Jan 17 12:17:02.872434 ntpd[1936]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 17 12:17:02.872444 ntpd[1936]: corporation. Support and training for ntp-4 are
Jan 17 12:17:02.872454 ntpd[1936]: available at https://www.nwtime.org/support
Jan 17 12:17:02.872463 ntpd[1936]: ----------------------------------------------------
Jan 17 12:17:02.900450 ntpd[1936]: proto: precision = 0.067 usec (-24)
Jan 17 12:17:02.907523 ntpd[1936]: basedate set to 2025-01-05
Jan 17 12:17:02.908471 systemd[1]: Started update-engine.service - Update Engine.
Jan 17 12:17:02.912067 update_engine[1942]: I20250117 12:17:02.906601 1942 update_check_scheduler.cc:74] Next update check in 10m22s
Jan 17 12:17:02.912116 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: basedate set to 2025-01-05
Jan 17 12:17:02.912116 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: gps base set to 2025-01-05 (week 2348)
Jan 17 12:17:02.907546 ntpd[1936]: gps base set to 2025-01-05 (week 2348)
Jan 17 12:17:02.918442 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 17 12:17:02.927629 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: Listen and drop on 0 v6wildcard [::]:123
Jan 17 12:17:02.927629 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 17 12:17:02.927629 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: Listen normally on 2 lo 127.0.0.1:123
Jan 17 12:17:02.927629 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: Listen normally on 3 eth0 172.31.21.110:123
Jan 17 12:17:02.927629 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: Listen normally on 4 lo [::1]:123
Jan 17 12:17:02.927629 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: Listen normally on 5 eth0 [fe80::494:fff:fed9:d5d7%2]:123
Jan 17 12:17:02.927629 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: Listening on routing socket on fd #22 for interface updates
Jan 17 12:17:02.923271 ntpd[1936]: Listen and drop on 0 v6wildcard [::]:123
Jan 17 12:17:02.923328 ntpd[1936]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 17 12:17:02.923511 ntpd[1936]: Listen normally on 2 lo 127.0.0.1:123
Jan 17 12:17:02.923547 ntpd[1936]: Listen normally on 3 eth0 172.31.21.110:123
Jan 17 12:17:02.923587 ntpd[1936]: Listen normally on 4 lo [::1]:123
Jan 17 12:17:02.923629 ntpd[1936]: Listen normally on 5 eth0 [fe80::494:fff:fed9:d5d7%2]:123
Jan 17 12:17:02.923664 ntpd[1936]: Listening on routing socket on fd #22 for interface updates
Jan 17 12:17:02.937927 systemd[1]: motdgen.service: Deactivated successfully.
Jan 17 12:17:02.938226 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 17 12:17:02.962315 extend-filesystems[1934]: Resized partition /dev/nvme0n1p9
Jan 17 12:17:02.981699 extend-filesystems[1989]: resize2fs 1.47.1 (20-May-2024)
Jan 17 12:17:02.985416 ntpd[1936]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 17 12:17:02.985461 ntpd[1936]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 17 12:17:02.985624 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 17 12:17:02.985624 ntpd[1936]: 17 Jan 12:17:02 ntpd[1936]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 17 12:17:03.000788 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jan 17 12:17:03.142041 coreos-metadata[1931]: Jan 17 12:17:03.141 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 17 12:17:03.148918 coreos-metadata[1931]: Jan 17 12:17:03.146 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 17 12:17:03.148918 coreos-metadata[1931]: Jan 17 12:17:03.147 INFO Fetch successful
Jan 17 12:17:03.148918 coreos-metadata[1931]: Jan 17 12:17:03.147 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 17 12:17:03.156081 coreos-metadata[1931]: Jan 17 12:17:03.150 INFO Fetch successful
Jan 17 12:17:03.156081 coreos-metadata[1931]: Jan 17 12:17:03.150 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 17 12:17:03.156081 coreos-metadata[1931]: Jan 17 12:17:03.151 INFO Fetch successful
Jan 17 12:17:03.156081 coreos-metadata[1931]: Jan 17 12:17:03.151 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 17 12:17:03.156081 coreos-metadata[1931]: Jan 17 12:17:03.151 INFO Fetch successful
Jan 17 12:17:03.156081 coreos-metadata[1931]: Jan 17 12:17:03.151 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 17 12:17:03.156081 coreos-metadata[1931]: Jan 17 12:17:03.152 INFO Fetch failed with 404: resource not found
Jan 17 12:17:03.156081 coreos-metadata[1931]: Jan 17 12:17:03.152 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 17 12:17:03.156081 coreos-metadata[1931]: Jan 17 12:17:03.153 INFO Fetch successful
Jan 17 12:17:03.156081 coreos-metadata[1931]: Jan 17 12:17:03.153 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 17 12:17:03.156081 coreos-metadata[1931]: Jan 17 12:17:03.154 INFO Fetch successful
Jan 17 12:17:03.156081 coreos-metadata[1931]: Jan 17 12:17:03.154 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 17 12:17:03.156081 coreos-metadata[1931]: Jan 17 12:17:03.155 INFO Fetch successful
Jan 17 12:17:03.156081 coreos-metadata[1931]: Jan 17 12:17:03.155 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 17 12:17:03.167329 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jan 17 12:17:03.190752 coreos-metadata[1931]: Jan 17 12:17:03.158 INFO Fetch successful
Jan 17 12:17:03.190752 coreos-metadata[1931]: Jan 17 12:17:03.158 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 17 12:17:03.190752 coreos-metadata[1931]: Jan 17 12:17:03.159 INFO Fetch successful
Jan 17 12:17:03.193564 extend-filesystems[1989]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 17 12:17:03.193564 extend-filesystems[1989]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 17 12:17:03.193564 extend-filesystems[1989]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jan 17 12:17:03.204689 extend-filesystems[1934]: Resized filesystem in /dev/nvme0n1p9
Jan 17 12:17:03.199338 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 17 12:17:03.199728 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 17 12:17:03.245112 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 17 12:17:03.291473 bash[2024]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 12:17:03.299253 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 17 12:17:03.299805 systemd-logind[1940]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 17 12:17:03.299833 systemd-logind[1940]: Watching system buttons on /dev/input/event3 (Sleep Button)
Jan 17 12:17:03.299857 systemd-logind[1940]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 17 12:17:03.311363 systemd-logind[1940]: New seat seat0.
Jan 17 12:17:03.327966 systemd[1]: Starting sshkeys.service...
Jan 17 12:17:03.329349 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 17 12:17:03.366069 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 17 12:17:03.368744 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 17 12:17:03.459907 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 17 12:17:03.474486 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 17 12:17:03.496068 amazon-ssm-agent[1973]: Initializing new seelog logger
Jan 17 12:17:03.496068 amazon-ssm-agent[1973]: New Seelog Logger Creation Complete
Jan 17 12:17:03.496068 amazon-ssm-agent[1973]: 2025/01/17 12:17:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 17 12:17:03.496068 amazon-ssm-agent[1973]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 17 12:17:03.496068 amazon-ssm-agent[1973]: 2025/01/17 12:17:03 processing appconfig overrides
Jan 17 12:17:03.498102 amazon-ssm-agent[1973]: 2025-01-17 12:17:03 INFO Proxy environment variables:
Jan 17 12:17:03.508242 amazon-ssm-agent[1973]: 2025/01/17 12:17:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 17 12:17:03.508242 amazon-ssm-agent[1973]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 17 12:17:03.508242 amazon-ssm-agent[1973]: 2025/01/17 12:17:03 processing appconfig overrides
Jan 17 12:17:03.508242 amazon-ssm-agent[1973]: 2025/01/17 12:17:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 17 12:17:03.508242 amazon-ssm-agent[1973]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 17 12:17:03.508242 amazon-ssm-agent[1973]: 2025/01/17 12:17:03 processing appconfig overrides
Jan 17 12:17:03.520059 amazon-ssm-agent[1973]: 2025/01/17 12:17:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 17 12:17:03.520059 amazon-ssm-agent[1973]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 17 12:17:03.520281 amazon-ssm-agent[1973]: 2025/01/17 12:17:03 processing appconfig overrides
Jan 17 12:17:03.566210 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1814)
Jan 17 12:17:03.614318 amazon-ssm-agent[1973]: 2025-01-17 12:17:03 INFO https_proxy:
Jan 17 12:17:03.715227 amazon-ssm-agent[1973]: 2025-01-17 12:17:03 INFO http_proxy:
Jan 17 12:17:03.817299 amazon-ssm-agent[1973]: 2025-01-17 12:17:03 INFO no_proxy:
Jan 17 12:17:03.830061 dbus-daemon[1932]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 17 12:17:03.832049 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 17 12:17:03.841901 dbus-daemon[1932]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1977 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 17 12:17:03.858163 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 17 12:17:03.862891 coreos-metadata[2036]: Jan 17 12:17:03.862 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 17 12:17:03.865149 coreos-metadata[2036]: Jan 17 12:17:03.865 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jan 17 12:17:03.865836 coreos-metadata[2036]: Jan 17 12:17:03.865 INFO Fetch successful
Jan 17 12:17:03.865836 coreos-metadata[2036]: Jan 17 12:17:03.865 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 17 12:17:03.871825 coreos-metadata[2036]: Jan 17 12:17:03.869 INFO Fetch successful
Jan 17 12:17:03.875931 unknown[2036]: wrote ssh authorized keys file for user: core
Jan 17 12:17:03.926781 polkitd[2077]: Started polkitd version 121
Jan 17 12:17:03.934859 amazon-ssm-agent[1973]: 2025-01-17 12:17:03 INFO Checking if agent identity type OnPrem can be assumed
Jan 17 12:17:03.969631 update-ssh-keys[2089]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 12:17:03.970058 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 17 12:17:03.988801 systemd[1]: Finished sshkeys.service.
Jan 17 12:17:03.996763 polkitd[2077]: Loading rules from directory /etc/polkit-1/rules.d
Jan 17 12:17:03.996865 polkitd[2077]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 17 12:17:03.998829 polkitd[2077]: Finished loading, compiling and executing 2 rules
Jan 17 12:17:04.002818 dbus-daemon[1932]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 17 12:17:04.004843 systemd[1]: Started polkit.service - Authorization Manager.
Jan 17 12:17:04.008920 polkitd[2077]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 17 12:17:04.031833 locksmithd[1982]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 17 12:17:04.045664 amazon-ssm-agent[1973]: 2025-01-17 12:17:03 INFO Checking if agent identity type EC2 can be assumed
Jan 17 12:17:04.068234 systemd-hostnamed[1977]: Hostname set to (transient)
Jan 17 12:17:04.078646 systemd-resolved[1767]: System hostname changed to 'ip-172-31-21-110'.
Jan 17 12:17:04.148261 amazon-ssm-agent[1973]: 2025-01-17 12:17:04 INFO Agent will take identity from EC2
Jan 17 12:17:04.246962 amazon-ssm-agent[1973]: 2025-01-17 12:17:04 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 17 12:17:04.306985 containerd[1955]: time="2025-01-17T12:17:04.306892456Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 17 12:17:04.357491 amazon-ssm-agent[1973]: 2025-01-17 12:17:04 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 17 12:17:04.398369 sshd_keygen[1976]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 17 12:17:04.455207 amazon-ssm-agent[1973]: 2025-01-17 12:17:04 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 17 12:17:04.459395 containerd[1955]: time="2025-01-17T12:17:04.459233962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:17:04.462396 containerd[1955]: time="2025-01-17T12:17:04.462342310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:17:04.462396 containerd[1955]: time="2025-01-17T12:17:04.462395710Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..."
type=io.containerd.event.v1
Jan 17 12:17:04.462547 containerd[1955]: time="2025-01-17T12:17:04.462419900Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 17 12:17:04.462699 containerd[1955]: time="2025-01-17T12:17:04.462675359Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 17 12:17:04.462751 containerd[1955]: time="2025-01-17T12:17:04.462709856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 17 12:17:04.462814 containerd[1955]: time="2025-01-17T12:17:04.462791427Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:17:04.462857 containerd[1955]: time="2025-01-17T12:17:04.462819306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:17:04.463092 containerd[1955]: time="2025-01-17T12:17:04.463050620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:17:04.463092 containerd[1955]: time="2025-01-17T12:17:04.463079172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 17 12:17:04.463224 containerd[1955]: time="2025-01-17T12:17:04.463101071Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:17:04.463224 containerd[1955]: time="2025-01-17T12:17:04.463115204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..."
type=io.containerd.snapshotter.v1
Jan 17 12:17:04.464543 containerd[1955]: time="2025-01-17T12:17:04.464266672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:17:04.466635 containerd[1955]: time="2025-01-17T12:17:04.465121199Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 17 12:17:04.467457 containerd[1955]: time="2025-01-17T12:17:04.467424834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 12:17:04.467457 containerd[1955]: time="2025-01-17T12:17:04.467450077Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 17 12:17:04.467680 containerd[1955]: time="2025-01-17T12:17:04.467659057Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 17 12:17:04.467751 containerd[1955]: time="2025-01-17T12:17:04.467736313Z" level=info msg="metadata content store policy set" policy=shared
Jan 17 12:17:04.479208 containerd[1955]: time="2025-01-17T12:17:04.476925208Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 17 12:17:04.479208 containerd[1955]: time="2025-01-17T12:17:04.477002798Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 17 12:17:04.479208 containerd[1955]: time="2025-01-17T12:17:04.477046422Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 17 12:17:04.479208 containerd[1955]: time="2025-01-17T12:17:04.477071018Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..."
type=io.containerd.streaming.v1 Jan 17 12:17:04.479208 containerd[1955]: time="2025-01-17T12:17:04.477117353Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:17:04.479208 containerd[1955]: time="2025-01-17T12:17:04.477334405Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:17:04.479208 containerd[1955]: time="2025-01-17T12:17:04.477665654Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:17:04.479208 containerd[1955]: time="2025-01-17T12:17:04.477805472Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:17:04.479208 containerd[1955]: time="2025-01-17T12:17:04.477827462Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:17:04.479208 containerd[1955]: time="2025-01-17T12:17:04.477849441Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:17:04.479208 containerd[1955]: time="2025-01-17T12:17:04.477878782Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:17:04.479208 containerd[1955]: time="2025-01-17T12:17:04.478039782Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:17:04.479208 containerd[1955]: time="2025-01-17T12:17:04.478058455Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:17:04.479208 containerd[1955]: time="2025-01-17T12:17:04.478078449Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 17 12:17:04.481023 containerd[1955]: time="2025-01-17T12:17:04.478099110Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:17:04.481023 containerd[1955]: time="2025-01-17T12:17:04.478116467Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:17:04.481023 containerd[1955]: time="2025-01-17T12:17:04.478132355Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:17:04.481023 containerd[1955]: time="2025-01-17T12:17:04.478151355Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:17:04.481023 containerd[1955]: time="2025-01-17T12:17:04.478198387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:17:04.481023 containerd[1955]: time="2025-01-17T12:17:04.478218571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:17:04.481023 containerd[1955]: time="2025-01-17T12:17:04.478235534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:17:04.481023 containerd[1955]: time="2025-01-17T12:17:04.478254938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:17:04.481023 containerd[1955]: time="2025-01-17T12:17:04.478272404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:17:04.481023 containerd[1955]: time="2025-01-17T12:17:04.478291509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:17:04.481023 containerd[1955]: time="2025-01-17T12:17:04.478322774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 17 12:17:04.481023 containerd[1955]: time="2025-01-17T12:17:04.478343124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:17:04.481023 containerd[1955]: time="2025-01-17T12:17:04.478362668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:17:04.481023 containerd[1955]: time="2025-01-17T12:17:04.478382262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:17:04.481615 containerd[1955]: time="2025-01-17T12:17:04.478400857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:17:04.481615 containerd[1955]: time="2025-01-17T12:17:04.478418145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:17:04.481615 containerd[1955]: time="2025-01-17T12:17:04.478435041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:17:04.481615 containerd[1955]: time="2025-01-17T12:17:04.478459069Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:17:04.481615 containerd[1955]: time="2025-01-17T12:17:04.478490834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:17:04.481615 containerd[1955]: time="2025-01-17T12:17:04.478509485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:17:04.481615 containerd[1955]: time="2025-01-17T12:17:04.478526390Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:17:04.481615 containerd[1955]: time="2025-01-17T12:17:04.478591764Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 17 12:17:04.481615 containerd[1955]: time="2025-01-17T12:17:04.478617955Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:17:04.481615 containerd[1955]: time="2025-01-17T12:17:04.478635095Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:17:04.481615 containerd[1955]: time="2025-01-17T12:17:04.478653396Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:17:04.481615 containerd[1955]: time="2025-01-17T12:17:04.478668287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:17:04.481615 containerd[1955]: time="2025-01-17T12:17:04.478685877Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:17:04.481615 containerd[1955]: time="2025-01-17T12:17:04.478701787Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:17:04.483172 containerd[1955]: time="2025-01-17T12:17:04.478716489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:17:04.483457 containerd[1955]: time="2025-01-17T12:17:04.479132528Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:17:04.484996 containerd[1955]: time="2025-01-17T12:17:04.483731172Z" level=info msg="Connect containerd service" Jan 17 12:17:04.484996 containerd[1955]: time="2025-01-17T12:17:04.483824559Z" level=info msg="using legacy CRI server" Jan 17 12:17:04.484996 containerd[1955]: time="2025-01-17T12:17:04.483838239Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:17:04.484996 containerd[1955]: time="2025-01-17T12:17:04.483987042Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:17:04.486534 containerd[1955]: time="2025-01-17T12:17:04.486257603Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:17:04.487577 containerd[1955]: time="2025-01-17T12:17:04.487529493Z" level=info msg="Start subscribing containerd event" Jan 17 12:17:04.487917 containerd[1955]: time="2025-01-17T12:17:04.487896467Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:17:04.488086 containerd[1955]: time="2025-01-17T12:17:04.488061027Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 17 12:17:04.490422 containerd[1955]: time="2025-01-17T12:17:04.489305590Z" level=info msg="Start recovering state" Jan 17 12:17:04.490422 containerd[1955]: time="2025-01-17T12:17:04.489414853Z" level=info msg="Start event monitor" Jan 17 12:17:04.490422 containerd[1955]: time="2025-01-17T12:17:04.489437283Z" level=info msg="Start snapshots syncer" Jan 17 12:17:04.490422 containerd[1955]: time="2025-01-17T12:17:04.489452048Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:17:04.490422 containerd[1955]: time="2025-01-17T12:17:04.489552343Z" level=info msg="Start streaming server" Jan 17 12:17:04.490422 containerd[1955]: time="2025-01-17T12:17:04.489633871Z" level=info msg="containerd successfully booted in 0.184152s" Jan 17 12:17:04.490319 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:17:04.504694 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:17:04.522788 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:17:04.531610 systemd[1]: Started sshd@0-172.31.21.110:22-139.178.89.65:33658.service - OpenSSH per-connection server daemon (139.178.89.65:33658). Jan 17 12:17:04.552616 amazon-ssm-agent[1973]: 2025-01-17 12:17:04 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 17 12:17:04.552575 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:17:04.552855 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:17:04.568614 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:17:04.600698 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:17:04.608821 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:17:04.617891 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:17:04.619699 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 17 12:17:04.653998 amazon-ssm-agent[1973]: 2025-01-17 12:17:04 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 17 12:17:04.685811 amazon-ssm-agent[1973]: 2025-01-17 12:17:04 INFO [amazon-ssm-agent] Starting Core Agent Jan 17 12:17:04.686003 amazon-ssm-agent[1973]: 2025-01-17 12:17:04 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 17 12:17:04.686069 amazon-ssm-agent[1973]: 2025-01-17 12:17:04 INFO [Registrar] Starting registrar module Jan 17 12:17:04.686145 amazon-ssm-agent[1973]: 2025-01-17 12:17:04 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 17 12:17:04.686294 amazon-ssm-agent[1973]: 2025-01-17 12:17:04 INFO [EC2Identity] EC2 registration was successful. Jan 17 12:17:04.686294 amazon-ssm-agent[1973]: 2025-01-17 12:17:04 INFO [CredentialRefresher] credentialRefresher has started Jan 17 12:17:04.686294 amazon-ssm-agent[1973]: 2025-01-17 12:17:04 INFO [CredentialRefresher] Starting credentials refresher loop Jan 17 12:17:04.686294 amazon-ssm-agent[1973]: 2025-01-17 12:17:04 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 17 12:17:04.754463 amazon-ssm-agent[1973]: 2025-01-17 12:17:04 INFO [CredentialRefresher] Next credential rotation will be in 32.38331359575 minutes Jan 17 12:17:04.756233 sshd[2161]: Accepted publickey for core from 139.178.89.65 port 33658 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:17:04.760692 sshd[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:04.788246 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:17:04.806801 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:17:04.819462 systemd-logind[1940]: New session 1 of user core. Jan 17 12:17:04.842954 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 17 12:17:04.853750 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:17:04.867844 (systemd)[2173]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:17:05.082537 systemd[2173]: Queued start job for default target default.target. Jan 17 12:17:05.094046 systemd[2173]: Created slice app.slice - User Application Slice. Jan 17 12:17:05.094087 systemd[2173]: Reached target paths.target - Paths. Jan 17 12:17:05.094108 systemd[2173]: Reached target timers.target - Timers. Jan 17 12:17:05.097290 systemd[2173]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:17:05.110683 systemd[2173]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:17:05.111869 systemd[2173]: Reached target sockets.target - Sockets. Jan 17 12:17:05.111906 systemd[2173]: Reached target basic.target - Basic System. Jan 17 12:17:05.111972 systemd[2173]: Reached target default.target - Main User Target. Jan 17 12:17:05.112012 systemd[2173]: Startup finished in 236ms. Jan 17 12:17:05.112147 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:17:05.120409 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:17:05.289937 systemd[1]: Started sshd@1-172.31.21.110:22-139.178.89.65:33662.service - OpenSSH per-connection server daemon (139.178.89.65:33662). Jan 17 12:17:05.507258 sshd[2184]: Accepted publickey for core from 139.178.89.65 port 33662 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:17:05.509000 sshd[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:05.516458 systemd-logind[1940]: New session 2 of user core. Jan 17 12:17:05.525438 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 17 12:17:05.658361 sshd[2184]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:05.662384 systemd[1]: sshd@1-172.31.21.110:22-139.178.89.65:33662.service: Deactivated successfully. Jan 17 12:17:05.666429 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:17:05.671014 systemd-logind[1940]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:17:05.673911 systemd-logind[1940]: Removed session 2. Jan 17 12:17:05.690636 systemd[1]: Started sshd@2-172.31.21.110:22-139.178.89.65:33664.service - OpenSSH per-connection server daemon (139.178.89.65:33664). Jan 17 12:17:05.731365 amazon-ssm-agent[1973]: 2025-01-17 12:17:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 17 12:17:05.836341 amazon-ssm-agent[1973]: 2025-01-17 12:17:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2194) started Jan 17 12:17:05.910771 sshd[2191]: Accepted publickey for core from 139.178.89.65 port 33664 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:17:05.915780 sshd[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:05.926503 systemd-logind[1940]: New session 3 of user core. Jan 17 12:17:05.933648 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:17:05.935571 amazon-ssm-agent[1973]: 2025-01-17 12:17:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 17 12:17:06.064792 sshd[2191]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:06.075191 systemd[1]: sshd@2-172.31.21.110:22-139.178.89.65:33664.service: Deactivated successfully. Jan 17 12:17:06.077092 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:17:06.080363 systemd-logind[1940]: Session 3 logged out. Waiting for processes to exit. 
Jan 17 12:17:06.081611 systemd-logind[1940]: Removed session 3. Jan 17 12:17:06.430922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:17:06.434213 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:17:06.442292 systemd[1]: Startup finished in 718ms (kernel) + 7.947s (initrd) + 9.832s (userspace) = 18.499s. Jan 17 12:17:06.600763 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:17:08.216937 kubelet[2213]: E0117 12:17:08.216852 2213 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:17:08.219727 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:17:08.220196 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:17:08.220687 systemd[1]: kubelet.service: Consumed 1.086s CPU time. Jan 17 12:17:10.206264 systemd-resolved[1767]: Clock change detected. Flushing caches. Jan 17 12:17:16.436543 systemd[1]: Started sshd@3-172.31.21.110:22-139.178.89.65:44806.service - OpenSSH per-connection server daemon (139.178.89.65:44806). Jan 17 12:17:16.614968 sshd[2227]: Accepted publickey for core from 139.178.89.65 port 44806 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:17:16.616756 sshd[2227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:16.631398 systemd-logind[1940]: New session 4 of user core. Jan 17 12:17:16.643635 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 17 12:17:16.781592 sshd[2227]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:16.793853 systemd[1]: sshd@3-172.31.21.110:22-139.178.89.65:44806.service: Deactivated successfully. Jan 17 12:17:16.800607 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:17:16.809852 systemd-logind[1940]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:17:16.841726 systemd[1]: Started sshd@4-172.31.21.110:22-139.178.89.65:44808.service - OpenSSH per-connection server daemon (139.178.89.65:44808). Jan 17 12:17:16.851051 systemd-logind[1940]: Removed session 4. Jan 17 12:17:17.030741 sshd[2234]: Accepted publickey for core from 139.178.89.65 port 44808 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:17:17.032432 sshd[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:17.040446 systemd-logind[1940]: New session 5 of user core. Jan 17 12:17:17.049419 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:17:17.163913 sshd[2234]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:17.168603 systemd[1]: sshd@4-172.31.21.110:22-139.178.89.65:44808.service: Deactivated successfully. Jan 17 12:17:17.171029 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:17:17.172070 systemd-logind[1940]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:17:17.173698 systemd-logind[1940]: Removed session 5. Jan 17 12:17:17.203578 systemd[1]: Started sshd@5-172.31.21.110:22-139.178.89.65:44812.service - OpenSSH per-connection server daemon (139.178.89.65:44812). Jan 17 12:17:17.359020 sshd[2241]: Accepted publickey for core from 139.178.89.65 port 44812 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:17:17.361064 sshd[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:17.366536 systemd-logind[1940]: New session 6 of user core. 
Jan 17 12:17:17.375464 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:17:17.503759 sshd[2241]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:17.513210 systemd[1]: sshd@5-172.31.21.110:22-139.178.89.65:44812.service: Deactivated successfully. Jan 17 12:17:17.524736 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:17:17.534089 systemd-logind[1940]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:17:17.552122 systemd[1]: Started sshd@6-172.31.21.110:22-139.178.89.65:44820.service - OpenSSH per-connection server daemon (139.178.89.65:44820). Jan 17 12:17:17.555924 systemd-logind[1940]: Removed session 6. Jan 17 12:17:17.733038 sshd[2248]: Accepted publickey for core from 139.178.89.65 port 44820 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:17:17.734729 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:17.739517 systemd-logind[1940]: New session 7 of user core. Jan 17 12:17:17.747461 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:17:17.898708 sudo[2251]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:17:17.899237 sudo[2251]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:17:17.918848 sudo[2251]: pam_unix(sudo:session): session closed for user root Jan 17 12:17:17.941959 sshd[2248]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:17.948279 systemd[1]: sshd@6-172.31.21.110:22-139.178.89.65:44820.service: Deactivated successfully. Jan 17 12:17:17.950711 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:17:17.951583 systemd-logind[1940]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:17:17.953041 systemd-logind[1940]: Removed session 7. 
Jan 17 12:17:17.980906 systemd[1]: Started sshd@7-172.31.21.110:22-139.178.89.65:44822.service - OpenSSH per-connection server daemon (139.178.89.65:44822). Jan 17 12:17:18.180922 sshd[2256]: Accepted publickey for core from 139.178.89.65 port 44822 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:17:18.183899 sshd[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:18.200672 systemd-logind[1940]: New session 8 of user core. Jan 17 12:17:18.214405 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:17:18.328760 sudo[2260]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:17:18.329188 sudo[2260]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:17:18.338106 sudo[2260]: pam_unix(sudo:session): session closed for user root Jan 17 12:17:18.359449 sudo[2259]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:17:18.359865 sudo[2259]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:17:18.396873 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:17:18.411028 auditctl[2263]: No rules Jan 17 12:17:18.412464 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:17:18.417293 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:17:18.429745 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:17:18.532560 augenrules[2281]: No rules Jan 17 12:17:18.534409 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 17 12:17:18.538108 sudo[2259]: pam_unix(sudo:session): session closed for user root Jan 17 12:17:18.564736 sshd[2256]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:18.569553 systemd[1]: sshd@7-172.31.21.110:22-139.178.89.65:44822.service: Deactivated successfully. Jan 17 12:17:18.577979 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:17:18.580068 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:17:18.587276 systemd-logind[1940]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:17:18.612895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:18.619623 systemd[1]: Started sshd@8-172.31.21.110:22-139.178.89.65:44834.service - OpenSSH per-connection server daemon (139.178.89.65:44834). Jan 17 12:17:18.624578 systemd-logind[1940]: Removed session 8. Jan 17 12:17:18.853425 sshd[2290]: Accepted publickey for core from 139.178.89.65 port 44834 ssh2: RSA SHA256:AjkUyOD8WQkRpQymIM0pevs1BX3RzdakAyVlC9NknRQ Jan 17 12:17:18.855345 sshd[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:17:18.863021 systemd-logind[1940]: New session 9 of user core. Jan 17 12:17:18.872531 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:17:18.894709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:17:18.900750 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:17:18.986632 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:17:18.987560 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:17:19.022940 kubelet[2300]: E0117 12:17:19.022164 2300 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:17:19.029038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:17:19.029407 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:17:20.350619 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:17:20.369561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:20.408592 systemd[1]: Reloading requested from client PID 2347 ('systemctl') (unit session-9.scope)... Jan 17 12:17:20.408610 systemd[1]: Reloading... Jan 17 12:17:20.582708 zram_generator::config[2387]: No configuration found. Jan 17 12:17:20.740758 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:17:20.843253 systemd[1]: Reloading finished in 433 ms. Jan 17 12:17:20.910391 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:17:20.910522 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:17:20.910848 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:17:20.930620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:21.109552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:17:21.130839 (kubelet)[2446]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:17:21.188197 kubelet[2446]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:17:21.188197 kubelet[2446]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:17:21.188197 kubelet[2446]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 12:17:21.188197 kubelet[2446]: I0117 12:17:21.187667 2446 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:17:21.759932 kubelet[2446]: I0117 12:17:21.759896 2446 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:17:21.759932 kubelet[2446]: I0117 12:17:21.759926 2446 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:17:21.760242 kubelet[2446]: I0117 12:17:21.760220 2446 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:17:21.795745 kubelet[2446]: I0117 12:17:21.795709 2446 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:17:21.809886 kubelet[2446]: I0117 12:17:21.809845 2446 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:17:21.811212 kubelet[2446]: I0117 12:17:21.810113 2446 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:17:21.811212 kubelet[2446]: I0117 12:17:21.810409 2446 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:17:21.811212 kubelet[2446]: I0117 12:17:21.810438 2446 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:17:21.811212 kubelet[2446]: I0117 12:17:21.810452 2446 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:17:21.812851 kubelet[2446]: I0117 12:17:21.812821 2446 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:17:21.812984 kubelet[2446]: I0117 12:17:21.812965 2446 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:17:21.813070 kubelet[2446]: I0117 12:17:21.812993 2446 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 
17 12:17:21.813070 kubelet[2446]: I0117 12:17:21.813048 2446 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:17:21.813070 kubelet[2446]: I0117 12:17:21.813064 2446 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:17:21.814873 kubelet[2446]: E0117 12:17:21.813535 2446 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:17:21.815676 kubelet[2446]: E0117 12:17:21.815242 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:17:21.816141 kubelet[2446]: I0117 12:17:21.816121 2446 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:17:21.821381 kubelet[2446]: I0117 12:17:21.821345 2446 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:17:21.823757 kubelet[2446]: W0117 12:17:21.823719 2446 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 17 12:17:21.824545 kubelet[2446]: I0117 12:17:21.824441 2446 server.go:1256] "Started kubelet" Jan 17 12:17:21.826212 kubelet[2446]: I0117 12:17:21.826034 2446 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:17:21.834203 kubelet[2446]: W0117 12:17:21.833423 2446 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 17 12:17:21.834203 kubelet[2446]: E0117 12:17:21.833464 2446 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 17 12:17:21.834203 kubelet[2446]: W0117 12:17:21.833551 2446 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "172.31.21.110" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 17 12:17:21.834203 kubelet[2446]: E0117 12:17:21.833568 2446 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.21.110" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 17 12:17:21.837374 kubelet[2446]: I0117 12:17:21.837343 2446 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:17:21.839470 kubelet[2446]: I0117 12:17:21.839361 2446 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:17:21.840858 kubelet[2446]: E0117 12:17:21.840830 2446 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.21.110.181b7a04af814f02 default 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.21.110,UID:172.31.21.110,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.21.110,},FirstTimestamp:2025-01-17 12:17:21.824403202 +0000 UTC m=+0.688769714,LastTimestamp:2025-01-17 12:17:21.824403202 +0000 UTC m=+0.688769714,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.21.110,}" Jan 17 12:17:21.843633 kubelet[2446]: I0117 12:17:21.843605 2446 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:17:21.844112 kubelet[2446]: I0117 12:17:21.844094 2446 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:17:21.848518 kubelet[2446]: I0117 12:17:21.848468 2446 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:17:21.848968 kubelet[2446]: E0117 12:17:21.848947 2446 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:17:21.849825 kubelet[2446]: I0117 12:17:21.849808 2446 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:17:21.849970 kubelet[2446]: I0117 12:17:21.849842 2446 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:17:21.852977 kubelet[2446]: E0117 12:17:21.852955 2446 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.21.110.181b7a04b0f770df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.21.110,UID:172.31.21.110,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.21.110,},FirstTimestamp:2025-01-17 12:17:21.848922335 +0000 UTC m=+0.713288848,LastTimestamp:2025-01-17 12:17:21.848922335 +0000 UTC m=+0.713288848,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.21.110,}" Jan 17 12:17:21.853214 kubelet[2446]: W0117 12:17:21.853101 2446 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 17 12:17:21.853396 kubelet[2446]: E0117 12:17:21.853379 2446 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 17 12:17:21.853493 kubelet[2446]: E0117 
12:17:21.853480 2446 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.21.110\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 17 12:17:21.854344 kubelet[2446]: I0117 12:17:21.854330 2446 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:17:21.854509 kubelet[2446]: I0117 12:17:21.854499 2446 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:17:21.856199 kubelet[2446]: I0117 12:17:21.854734 2446 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:17:21.877740 kubelet[2446]: I0117 12:17:21.877702 2446 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:17:21.877916 kubelet[2446]: I0117 12:17:21.877908 2446 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:17:21.878004 kubelet[2446]: I0117 12:17:21.877996 2446 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:17:21.880676 kubelet[2446]: I0117 12:17:21.880653 2446 policy_none.go:49] "None policy: Start" Jan 17 12:17:21.881800 kubelet[2446]: I0117 12:17:21.881774 2446 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:17:21.881963 kubelet[2446]: I0117 12:17:21.881953 2446 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:17:21.897665 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:17:21.912694 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:17:21.918440 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 17 12:17:21.925831 kubelet[2446]: I0117 12:17:21.925536 2446 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:17:21.926422 kubelet[2446]: I0117 12:17:21.926318 2446 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:17:21.931465 kubelet[2446]: E0117 12:17:21.931439 2446 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.21.110\" not found" Jan 17 12:17:21.951527 kubelet[2446]: I0117 12:17:21.951496 2446 kubelet_node_status.go:73] "Attempting to register node" node="172.31.21.110" Jan 17 12:17:21.956070 kubelet[2446]: I0117 12:17:21.956036 2446 kubelet_node_status.go:76] "Successfully registered node" node="172.31.21.110" Jan 17 12:17:21.966251 kubelet[2446]: I0117 12:17:21.966134 2446 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:17:21.968030 kubelet[2446]: I0117 12:17:21.968003 2446 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:17:21.968146 kubelet[2446]: I0117 12:17:21.968038 2446 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:17:21.968146 kubelet[2446]: I0117 12:17:21.968058 2446 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:17:21.968281 kubelet[2446]: E0117 12:17:21.968188 2446 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 17 12:17:21.971187 kubelet[2446]: E0117 12:17:21.971140 2446 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.110\" not found" Jan 17 12:17:22.071754 kubelet[2446]: E0117 12:17:22.071626 2446 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.110\" not found" Jan 17 12:17:22.172256 kubelet[2446]: E0117 12:17:22.172214 2446 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.110\" not found" Jan 17 12:17:22.272879 kubelet[2446]: E0117 12:17:22.272837 2446 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.110\" not found" Jan 17 12:17:22.373124 kubelet[2446]: E0117 12:17:22.372992 2446 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.110\" not found" Jan 17 12:17:22.473716 kubelet[2446]: E0117 12:17:22.473665 2446 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.110\" not found" Jan 17 12:17:22.574373 kubelet[2446]: E0117 12:17:22.574325 2446 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.110\" not found" Jan 17 12:17:22.640482 sudo[2306]: pam_unix(sudo:session): session closed for user root Jan 17 12:17:22.663088 sshd[2290]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:22.667821 systemd-logind[1940]: Session 9 logged out. Waiting for processes to exit. 
Jan 17 12:17:22.668858 systemd[1]: sshd@8-172.31.21.110:22-139.178.89.65:44834.service: Deactivated successfully. Jan 17 12:17:22.671364 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:17:22.672512 systemd-logind[1940]: Removed session 9. Jan 17 12:17:22.674967 kubelet[2446]: E0117 12:17:22.674926 2446 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.110\" not found" Jan 17 12:17:22.763696 kubelet[2446]: I0117 12:17:22.763652 2446 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 17 12:17:22.763864 kubelet[2446]: W0117 12:17:22.763846 2446 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 17 12:17:22.776315 kubelet[2446]: E0117 12:17:22.776268 2446 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.110\" not found" Jan 17 12:17:22.815674 kubelet[2446]: E0117 12:17:22.815620 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:17:22.876649 kubelet[2446]: E0117 12:17:22.876602 2446 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.110\" not found" Jan 17 12:17:22.977590 kubelet[2446]: E0117 12:17:22.977467 2446 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.110\" not found" Jan 17 12:17:23.079225 kubelet[2446]: I0117 12:17:23.079195 2446 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 17 12:17:23.079637 containerd[1955]: time="2025-01-17T12:17:23.079601680Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 17 12:17:23.080548 kubelet[2446]: I0117 12:17:23.080523 2446 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 17 12:17:23.815722 kubelet[2446]: I0117 12:17:23.815685 2446 apiserver.go:52] "Watching apiserver" Jan 17 12:17:23.816340 kubelet[2446]: E0117 12:17:23.815713 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:17:23.821266 kubelet[2446]: I0117 12:17:23.821224 2446 topology_manager.go:215] "Topology Admit Handler" podUID="7fce3788-01e9-4e7f-ad33-6fdeab5a6981" podNamespace="kube-system" podName="cilium-4q2bm" Jan 17 12:17:23.821395 kubelet[2446]: I0117 12:17:23.821379 2446 topology_manager.go:215] "Topology Admit Handler" podUID="2166d6cf-b989-4c22-863b-bf6c3dee14ad" podNamespace="kube-system" podName="kube-proxy-5ct5s" Jan 17 12:17:23.846491 systemd[1]: Created slice kubepods-besteffort-pod2166d6cf_b989_4c22_863b_bf6c3dee14ad.slice - libcontainer container kubepods-besteffort-pod2166d6cf_b989_4c22_863b_bf6c3dee14ad.slice. Jan 17 12:17:23.850326 kubelet[2446]: I0117 12:17:23.850270 2446 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:17:23.859291 systemd[1]: Created slice kubepods-burstable-pod7fce3788_01e9_4e7f_ad33_6fdeab5a6981.slice - libcontainer container kubepods-burstable-pod7fce3788_01e9_4e7f_ad33_6fdeab5a6981.slice. 
Jan 17 12:17:23.863099 kubelet[2446]: I0117 12:17:23.863069 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-lib-modules\") pod \"cilium-4q2bm\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") " pod="kube-system/cilium-4q2bm" Jan 17 12:17:23.863929 kubelet[2446]: I0117 12:17:23.863115 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-xtables-lock\") pod \"cilium-4q2bm\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") " pod="kube-system/cilium-4q2bm" Jan 17 12:17:23.863929 kubelet[2446]: I0117 12:17:23.863147 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2166d6cf-b989-4c22-863b-bf6c3dee14ad-xtables-lock\") pod \"kube-proxy-5ct5s\" (UID: \"2166d6cf-b989-4c22-863b-bf6c3dee14ad\") " pod="kube-system/kube-proxy-5ct5s" Jan 17 12:17:23.863929 kubelet[2446]: I0117 12:17:23.863187 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-cni-path\") pod \"cilium-4q2bm\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") " pod="kube-system/cilium-4q2bm" Jan 17 12:17:23.863929 kubelet[2446]: I0117 12:17:23.863215 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-etc-cni-netd\") pod \"cilium-4q2bm\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") " pod="kube-system/cilium-4q2bm" Jan 17 12:17:23.863929 kubelet[2446]: I0117 12:17:23.863244 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-cilium-cgroup\") pod \"cilium-4q2bm\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") " pod="kube-system/cilium-4q2bm" Jan 17 12:17:23.863929 kubelet[2446]: I0117 12:17:23.863288 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-clustermesh-secrets\") pod \"cilium-4q2bm\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") " pod="kube-system/cilium-4q2bm" Jan 17 12:17:23.864315 kubelet[2446]: I0117 12:17:23.863319 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-cilium-config-path\") pod \"cilium-4q2bm\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") " pod="kube-system/cilium-4q2bm" Jan 17 12:17:23.864315 kubelet[2446]: I0117 12:17:23.863352 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-host-proc-sys-net\") pod \"cilium-4q2bm\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") " pod="kube-system/cilium-4q2bm" Jan 17 12:17:23.864315 kubelet[2446]: I0117 12:17:23.863396 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64pqn\" (UniqueName: \"kubernetes.io/projected/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-kube-api-access-64pqn\") pod \"cilium-4q2bm\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") " pod="kube-system/cilium-4q2bm" Jan 17 12:17:23.864315 kubelet[2446]: I0117 12:17:23.863491 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/2166d6cf-b989-4c22-863b-bf6c3dee14ad-kube-proxy\") pod \"kube-proxy-5ct5s\" (UID: \"2166d6cf-b989-4c22-863b-bf6c3dee14ad\") " pod="kube-system/kube-proxy-5ct5s" Jan 17 12:17:23.864315 kubelet[2446]: I0117 12:17:23.863530 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-cilium-run\") pod \"cilium-4q2bm\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") " pod="kube-system/cilium-4q2bm" Jan 17 12:17:23.864565 kubelet[2446]: I0117 12:17:23.864389 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-bpf-maps\") pod \"cilium-4q2bm\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") " pod="kube-system/cilium-4q2bm" Jan 17 12:17:23.864565 kubelet[2446]: I0117 12:17:23.864452 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-host-proc-sys-kernel\") pod \"cilium-4q2bm\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") " pod="kube-system/cilium-4q2bm" Jan 17 12:17:23.864565 kubelet[2446]: I0117 12:17:23.864486 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-hubble-tls\") pod \"cilium-4q2bm\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") " pod="kube-system/cilium-4q2bm" Jan 17 12:17:23.864565 kubelet[2446]: I0117 12:17:23.864533 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs5zt\" (UniqueName: \"kubernetes.io/projected/2166d6cf-b989-4c22-863b-bf6c3dee14ad-kube-api-access-fs5zt\") pod \"kube-proxy-5ct5s\" (UID: 
\"2166d6cf-b989-4c22-863b-bf6c3dee14ad\") " pod="kube-system/kube-proxy-5ct5s" Jan 17 12:17:23.864716 kubelet[2446]: I0117 12:17:23.864575 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-hostproc\") pod \"cilium-4q2bm\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") " pod="kube-system/cilium-4q2bm" Jan 17 12:17:23.864716 kubelet[2446]: I0117 12:17:23.864614 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2166d6cf-b989-4c22-863b-bf6c3dee14ad-lib-modules\") pod \"kube-proxy-5ct5s\" (UID: \"2166d6cf-b989-4c22-863b-bf6c3dee14ad\") " pod="kube-system/kube-proxy-5ct5s" Jan 17 12:17:24.156638 containerd[1955]: time="2025-01-17T12:17:24.156592509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5ct5s,Uid:2166d6cf-b989-4c22-863b-bf6c3dee14ad,Namespace:kube-system,Attempt:0,}" Jan 17 12:17:24.167718 containerd[1955]: time="2025-01-17T12:17:24.167397826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4q2bm,Uid:7fce3788-01e9-4e7f-ad33-6fdeab5a6981,Namespace:kube-system,Attempt:0,}" Jan 17 12:17:24.786570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount962871366.mount: Deactivated successfully. 
Jan 17 12:17:24.801281 containerd[1955]: time="2025-01-17T12:17:24.801230182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:17:24.802287 containerd[1955]: time="2025-01-17T12:17:24.802251581Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:17:24.804190 containerd[1955]: time="2025-01-17T12:17:24.804087896Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:17:24.804539 containerd[1955]: time="2025-01-17T12:17:24.804501612Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:17:24.807252 containerd[1955]: time="2025-01-17T12:17:24.807071332Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:17:24.810567 containerd[1955]: time="2025-01-17T12:17:24.810531711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:17:24.813192 containerd[1955]: time="2025-01-17T12:17:24.811749230Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 655.062478ms" Jan 17 12:17:24.814547 containerd[1955]: 
time="2025-01-17T12:17:24.814515806Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 647.020843ms" Jan 17 12:17:24.816581 kubelet[2446]: E0117 12:17:24.816552 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:17:25.088916 containerd[1955]: time="2025-01-17T12:17:25.088598092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:17:25.088916 containerd[1955]: time="2025-01-17T12:17:25.088673673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:17:25.088916 containerd[1955]: time="2025-01-17T12:17:25.088690656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:25.088916 containerd[1955]: time="2025-01-17T12:17:25.088790990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:25.090643 containerd[1955]: time="2025-01-17T12:17:25.090186776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:17:25.090643 containerd[1955]: time="2025-01-17T12:17:25.090465566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:17:25.090643 containerd[1955]: time="2025-01-17T12:17:25.090488142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:25.091277 containerd[1955]: time="2025-01-17T12:17:25.091107153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:25.315414 systemd[1]: Started cri-containerd-98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352.scope - libcontainer container 98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352. Jan 17 12:17:25.319343 systemd[1]: Started cri-containerd-c71af27aa2cbacfc5d8f4353dd851ed3bc61bb8e2496110db176b020aa488afd.scope - libcontainer container c71af27aa2cbacfc5d8f4353dd851ed3bc61bb8e2496110db176b020aa488afd. Jan 17 12:17:25.386974 containerd[1955]: time="2025-01-17T12:17:25.386887905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4q2bm,Uid:7fce3788-01e9-4e7f-ad33-6fdeab5a6981,Namespace:kube-system,Attempt:0,} returns sandbox id \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\"" Jan 17 12:17:25.393735 containerd[1955]: time="2025-01-17T12:17:25.393108800Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 12:17:25.400264 containerd[1955]: time="2025-01-17T12:17:25.400067887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5ct5s,Uid:2166d6cf-b989-4c22-863b-bf6c3dee14ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"c71af27aa2cbacfc5d8f4353dd851ed3bc61bb8e2496110db176b020aa488afd\"" Jan 17 12:17:25.816973 kubelet[2446]: E0117 12:17:25.816799 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:17:25.972924 systemd[1]: run-containerd-runc-k8s.io-c71af27aa2cbacfc5d8f4353dd851ed3bc61bb8e2496110db176b020aa488afd-runc.beyuEE.mount: Deactivated successfully. 
Jan 17 12:17:26.817141 kubelet[2446]: E0117 12:17:26.817047 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:17:27.817824 kubelet[2446]: E0117 12:17:27.817586 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:17:28.818691 kubelet[2446]: E0117 12:17:28.818634 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:17:29.819585 kubelet[2446]: E0117 12:17:29.819542 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:17:30.820476 kubelet[2446]: E0117 12:17:30.820439 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:17:30.897793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1730210483.mount: Deactivated successfully. 
Jan 17 12:17:31.821801 kubelet[2446]: E0117 12:17:31.821669 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:32.821930 kubelet[2446]: E0117 12:17:32.821832 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:33.416845 containerd[1955]: time="2025-01-17T12:17:33.416756208Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:17:33.418023 containerd[1955]: time="2025-01-17T12:17:33.417875104Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735327"
Jan 17 12:17:33.419478 containerd[1955]: time="2025-01-17T12:17:33.419118026Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:17:33.420819 containerd[1955]: time="2025-01-17T12:17:33.420781074Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.027619089s"
Jan 17 12:17:33.420974 containerd[1955]: time="2025-01-17T12:17:33.420933449Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 17 12:17:33.422401 containerd[1955]: time="2025-01-17T12:17:33.422252355Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\""
Jan 17 12:17:33.423529 containerd[1955]: time="2025-01-17T12:17:33.423490959Z" level=info msg="CreateContainer within sandbox \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 17 12:17:33.450613 containerd[1955]: time="2025-01-17T12:17:33.450567939Z" level=info msg="CreateContainer within sandbox \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7\""
Jan 17 12:17:33.451415 containerd[1955]: time="2025-01-17T12:17:33.451381716Z" level=info msg="StartContainer for \"8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7\""
Jan 17 12:17:33.486019 systemd[1]: run-containerd-runc-k8s.io-8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7-runc.WOcWLx.mount: Deactivated successfully.
Jan 17 12:17:33.496469 systemd[1]: Started cri-containerd-8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7.scope - libcontainer container 8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7.
Jan 17 12:17:33.535256 containerd[1955]: time="2025-01-17T12:17:33.534980796Z" level=info msg="StartContainer for \"8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7\" returns successfully"
Jan 17 12:17:33.541888 systemd[1]: cri-containerd-8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7.scope: Deactivated successfully.
Jan 17 12:17:33.823209 kubelet[2446]: E0117 12:17:33.822615 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:34.137861 containerd[1955]: time="2025-01-17T12:17:34.137534291Z" level=info msg="shim disconnected" id=8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7 namespace=k8s.io
Jan 17 12:17:34.137861 containerd[1955]: time="2025-01-17T12:17:34.137674284Z" level=warning msg="cleaning up after shim disconnected" id=8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7 namespace=k8s.io
Jan 17 12:17:34.137861 containerd[1955]: time="2025-01-17T12:17:34.137688948Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:17:34.156395 containerd[1955]: time="2025-01-17T12:17:34.156320873Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:17:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 12:17:34.442406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7-rootfs.mount: Deactivated successfully.
Jan 17 12:17:34.451520 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 17 12:17:34.823612 kubelet[2446]: E0117 12:17:34.823487 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:35.026391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3921357152.mount: Deactivated successfully.
Jan 17 12:17:35.108203 containerd[1955]: time="2025-01-17T12:17:35.108010696Z" level=info msg="CreateContainer within sandbox \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 17 12:17:35.154151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2792000506.mount: Deactivated successfully.
Jan 17 12:17:35.163798 containerd[1955]: time="2025-01-17T12:17:35.163451095Z" level=info msg="CreateContainer within sandbox \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905\""
Jan 17 12:17:35.165654 containerd[1955]: time="2025-01-17T12:17:35.164373747Z" level=info msg="StartContainer for \"ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905\""
Jan 17 12:17:35.211610 systemd[1]: Started cri-containerd-ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905.scope - libcontainer container ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905.
Jan 17 12:17:35.252017 containerd[1955]: time="2025-01-17T12:17:35.251864263Z" level=info msg="StartContainer for \"ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905\" returns successfully"
Jan 17 12:17:35.268491 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 12:17:35.268841 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:17:35.268977 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:17:35.278679 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:17:35.282085 systemd[1]: cri-containerd-ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905.scope: Deactivated successfully.
Jan 17 12:17:35.314325 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:17:35.425033 containerd[1955]: time="2025-01-17T12:17:35.424965450Z" level=info msg="shim disconnected" id=ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905 namespace=k8s.io
Jan 17 12:17:35.425509 containerd[1955]: time="2025-01-17T12:17:35.425294861Z" level=warning msg="cleaning up after shim disconnected" id=ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905 namespace=k8s.io
Jan 17 12:17:35.425509 containerd[1955]: time="2025-01-17T12:17:35.425313100Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:17:35.442572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3583050724.mount: Deactivated successfully.
Jan 17 12:17:35.446983 containerd[1955]: time="2025-01-17T12:17:35.446938580Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:17:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 12:17:35.824596 kubelet[2446]: E0117 12:17:35.824487 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:35.886780 containerd[1955]: time="2025-01-17T12:17:35.886720889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:17:35.887683 containerd[1955]: time="2025-01-17T12:17:35.887560246Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620941"
Jan 17 12:17:35.889842 containerd[1955]: time="2025-01-17T12:17:35.888836860Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:17:35.891487 containerd[1955]: time="2025-01-17T12:17:35.891450409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:17:35.892121 containerd[1955]: time="2025-01-17T12:17:35.892085511Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 2.469796784s"
Jan 17 12:17:35.892223 containerd[1955]: time="2025-01-17T12:17:35.892131926Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\""
Jan 17 12:17:35.894213 containerd[1955]: time="2025-01-17T12:17:35.894162721Z" level=info msg="CreateContainer within sandbox \"c71af27aa2cbacfc5d8f4353dd851ed3bc61bb8e2496110db176b020aa488afd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 12:17:35.927402 containerd[1955]: time="2025-01-17T12:17:35.927364652Z" level=info msg="CreateContainer within sandbox \"c71af27aa2cbacfc5d8f4353dd851ed3bc61bb8e2496110db176b020aa488afd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eb5dfac9c239956ca53a1873bc211d27c31c23f8d6bab59e3bd8438eb1385ac7\""
Jan 17 12:17:35.929048 containerd[1955]: time="2025-01-17T12:17:35.929004348Z" level=info msg="StartContainer for \"eb5dfac9c239956ca53a1873bc211d27c31c23f8d6bab59e3bd8438eb1385ac7\""
Jan 17 12:17:35.970390 systemd[1]: Started cri-containerd-eb5dfac9c239956ca53a1873bc211d27c31c23f8d6bab59e3bd8438eb1385ac7.scope - libcontainer container eb5dfac9c239956ca53a1873bc211d27c31c23f8d6bab59e3bd8438eb1385ac7.
Jan 17 12:17:36.002246 containerd[1955]: time="2025-01-17T12:17:36.001159568Z" level=info msg="StartContainer for \"eb5dfac9c239956ca53a1873bc211d27c31c23f8d6bab59e3bd8438eb1385ac7\" returns successfully"
Jan 17 12:17:36.139651 containerd[1955]: time="2025-01-17T12:17:36.135881385Z" level=info msg="CreateContainer within sandbox \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 12:17:36.170294 kubelet[2446]: I0117 12:17:36.168098 2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5ct5s" podStartSLOduration=4.680692264 podStartE2EDuration="15.168042617s" podCreationTimestamp="2025-01-17 12:17:21 +0000 UTC" firstStartedPulling="2025-01-17 12:17:25.4050331 +0000 UTC m=+4.269399603" lastFinishedPulling="2025-01-17 12:17:35.892383454 +0000 UTC m=+14.756749956" observedRunningTime="2025-01-17 12:17:36.1324335 +0000 UTC m=+14.996800001" watchObservedRunningTime="2025-01-17 12:17:36.168042617 +0000 UTC m=+15.032409111"
Jan 17 12:17:36.171306 containerd[1955]: time="2025-01-17T12:17:36.171257689Z" level=info msg="CreateContainer within sandbox \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9\""
Jan 17 12:17:36.175260 containerd[1955]: time="2025-01-17T12:17:36.175163887Z" level=info msg="StartContainer for \"a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9\""
Jan 17 12:17:36.249923 systemd[1]: Started cri-containerd-a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9.scope - libcontainer container a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9.
Jan 17 12:17:36.340800 containerd[1955]: time="2025-01-17T12:17:36.340744384Z" level=info msg="StartContainer for \"a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9\" returns successfully"
Jan 17 12:17:36.346349 systemd[1]: cri-containerd-a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9.scope: Deactivated successfully.
Jan 17 12:17:36.441304 systemd[1]: run-containerd-runc-k8s.io-eb5dfac9c239956ca53a1873bc211d27c31c23f8d6bab59e3bd8438eb1385ac7-runc.eLfpLf.mount: Deactivated successfully.
Jan 17 12:17:36.445572 containerd[1955]: time="2025-01-17T12:17:36.445281575Z" level=info msg="shim disconnected" id=a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9 namespace=k8s.io
Jan 17 12:17:36.446276 containerd[1955]: time="2025-01-17T12:17:36.445553786Z" level=warning msg="cleaning up after shim disconnected" id=a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9 namespace=k8s.io
Jan 17 12:17:36.446276 containerd[1955]: time="2025-01-17T12:17:36.445588902Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:17:36.825673 kubelet[2446]: E0117 12:17:36.825534 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:37.133446 containerd[1955]: time="2025-01-17T12:17:37.133406910Z" level=info msg="CreateContainer within sandbox \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 12:17:37.150661 containerd[1955]: time="2025-01-17T12:17:37.150612747Z" level=info msg="CreateContainer within sandbox \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544\""
Jan 17 12:17:37.152993 containerd[1955]: time="2025-01-17T12:17:37.151974205Z" level=info msg="StartContainer for \"9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544\""
Jan 17 12:17:37.225383 systemd[1]: Started cri-containerd-9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544.scope - libcontainer container 9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544.
Jan 17 12:17:37.256021 systemd[1]: cri-containerd-9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544.scope: Deactivated successfully.
Jan 17 12:17:37.259856 containerd[1955]: time="2025-01-17T12:17:37.257965869Z" level=info msg="StartContainer for \"9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544\" returns successfully"
Jan 17 12:17:37.284500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544-rootfs.mount: Deactivated successfully.
Jan 17 12:17:37.292593 containerd[1955]: time="2025-01-17T12:17:37.292529187Z" level=info msg="shim disconnected" id=9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544 namespace=k8s.io
Jan 17 12:17:37.292593 containerd[1955]: time="2025-01-17T12:17:37.292585916Z" level=warning msg="cleaning up after shim disconnected" id=9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544 namespace=k8s.io
Jan 17 12:17:37.292593 containerd[1955]: time="2025-01-17T12:17:37.292597321Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:17:37.309553 containerd[1955]: time="2025-01-17T12:17:37.309509734Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:17:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 12:17:37.826632 kubelet[2446]: E0117 12:17:37.826591 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:38.137790 containerd[1955]: time="2025-01-17T12:17:38.137749176Z" level=info msg="CreateContainer within sandbox \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 12:17:38.159266 containerd[1955]: time="2025-01-17T12:17:38.159134813Z" level=info msg="CreateContainer within sandbox \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59\""
Jan 17 12:17:38.161509 containerd[1955]: time="2025-01-17T12:17:38.160095598Z" level=info msg="StartContainer for \"7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59\""
Jan 17 12:17:38.205366 systemd[1]: Started cri-containerd-7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59.scope - libcontainer container 7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59.
Jan 17 12:17:38.242580 containerd[1955]: time="2025-01-17T12:17:38.242531931Z" level=info msg="StartContainer for \"7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59\" returns successfully"
Jan 17 12:17:38.397528 kubelet[2446]: I0117 12:17:38.397422 2446 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 17 12:17:38.825293 kernel: Initializing XFRM netlink socket
Jan 17 12:17:38.827597 kubelet[2446]: E0117 12:17:38.827531 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:39.828107 kubelet[2446]: E0117 12:17:39.828056 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:40.528539 systemd-networkd[1808]: cilium_host: Link UP
Jan 17 12:17:40.530158 systemd-networkd[1808]: cilium_net: Link UP
Jan 17 12:17:40.530418 systemd-networkd[1808]: cilium_net: Gained carrier
Jan 17 12:17:40.530616 systemd-networkd[1808]: cilium_host: Gained carrier
Jan 17 12:17:40.537420 (udev-worker)[3130]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:17:40.537971 (udev-worker)[2882]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:17:40.564386 systemd-networkd[1808]: cilium_net: Gained IPv6LL
Jan 17 12:17:40.681054 (udev-worker)[3154]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:17:40.691236 systemd-networkd[1808]: cilium_vxlan: Link UP
Jan 17 12:17:40.691250 systemd-networkd[1808]: cilium_vxlan: Gained carrier
Jan 17 12:17:40.829267 kubelet[2446]: E0117 12:17:40.828531 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:40.874240 kubelet[2446]: I0117 12:17:40.874196 2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-4q2bm" podStartSLOduration=11.84382512 podStartE2EDuration="19.874116043s" podCreationTimestamp="2025-01-17 12:17:21 +0000 UTC" firstStartedPulling="2025-01-17 12:17:25.391203672 +0000 UTC m=+4.255570188" lastFinishedPulling="2025-01-17 12:17:33.42149462 +0000 UTC m=+12.285861111" observedRunningTime="2025-01-17 12:17:39.170364651 +0000 UTC m=+18.034731167" watchObservedRunningTime="2025-01-17 12:17:40.874116043 +0000 UTC m=+19.738482549"
Jan 17 12:17:40.874787 kubelet[2446]: I0117 12:17:40.874759 2446 topology_manager.go:215] "Topology Admit Handler" podUID="332cdf3a-9c15-44f5-b4bb-399c44f4e8be" podNamespace="default" podName="nginx-deployment-6d5f899847-tp97b"
Jan 17 12:17:40.896390 systemd[1]: Created slice kubepods-besteffort-pod332cdf3a_9c15_44f5_b4bb_399c44f4e8be.slice - libcontainer container kubepods-besteffort-pod332cdf3a_9c15_44f5_b4bb_399c44f4e8be.slice.
Jan 17 12:17:40.902194 kubelet[2446]: I0117 12:17:40.901057 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szs6p\" (UniqueName: \"kubernetes.io/projected/332cdf3a-9c15-44f5-b4bb-399c44f4e8be-kube-api-access-szs6p\") pod \"nginx-deployment-6d5f899847-tp97b\" (UID: \"332cdf3a-9c15-44f5-b4bb-399c44f4e8be\") " pod="default/nginx-deployment-6d5f899847-tp97b"
Jan 17 12:17:41.018470 systemd-networkd[1808]: cilium_host: Gained IPv6LL
Jan 17 12:17:41.038591 kernel: NET: Registered PF_ALG protocol family
Jan 17 12:17:41.200979 containerd[1955]: time="2025-01-17T12:17:41.200920067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-tp97b,Uid:332cdf3a-9c15-44f5-b4bb-399c44f4e8be,Namespace:default,Attempt:0,}"
Jan 17 12:17:41.813225 kubelet[2446]: E0117 12:17:41.813150 2446 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:41.828823 kubelet[2446]: E0117 12:17:41.828758 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:42.019653 systemd-networkd[1808]: lxc_health: Link UP
Jan 17 12:17:42.042822 systemd-networkd[1808]: lxc_health: Gained carrier
Jan 17 12:17:42.323836 systemd-networkd[1808]: lxcfa3c6858534b: Link UP
Jan 17 12:17:42.329083 kernel: eth0: renamed from tmpf70d5
Jan 17 12:17:42.331795 systemd-networkd[1808]: lxcfa3c6858534b: Gained carrier
Jan 17 12:17:42.610338 systemd-networkd[1808]: cilium_vxlan: Gained IPv6LL
Jan 17 12:17:42.830018 kubelet[2446]: E0117 12:17:42.829969 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:43.826528 systemd-networkd[1808]: lxc_health: Gained IPv6LL
Jan 17 12:17:43.830963 kubelet[2446]: E0117 12:17:43.830881 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:44.216006 systemd-networkd[1808]: lxcfa3c6858534b: Gained IPv6LL
Jan 17 12:17:44.835597 kubelet[2446]: E0117 12:17:44.835530 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:45.837299 kubelet[2446]: E0117 12:17:45.836939 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:46.838207 kubelet[2446]: E0117 12:17:46.837739 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:47.206039 ntpd[1936]: Listen normally on 6 cilium_host 192.168.1.240:123
Jan 17 12:17:47.206141 ntpd[1936]: Listen normally on 7 cilium_net [fe80::f816:cff:feaf:3d%3]:123
Jan 17 12:17:47.206817 ntpd[1936]: 17 Jan 12:17:47 ntpd[1936]: Listen normally on 6 cilium_host 192.168.1.240:123
Jan 17 12:17:47.206817 ntpd[1936]: 17 Jan 12:17:47 ntpd[1936]: Listen normally on 7 cilium_net [fe80::f816:cff:feaf:3d%3]:123
Jan 17 12:17:47.206817 ntpd[1936]: 17 Jan 12:17:47 ntpd[1936]: Listen normally on 8 cilium_host [fe80::5c30:3aff:fe35:8b08%4]:123
Jan 17 12:17:47.206817 ntpd[1936]: 17 Jan 12:17:47 ntpd[1936]: Listen normally on 9 cilium_vxlan [fe80::ac5f:7bff:fe81:3104%5]:123
Jan 17 12:17:47.206817 ntpd[1936]: 17 Jan 12:17:47 ntpd[1936]: Listen normally on 10 lxc_health [fe80::600d:1aff:fee8:c112%7]:123
Jan 17 12:17:47.206817 ntpd[1936]: 17 Jan 12:17:47 ntpd[1936]: Listen normally on 11 lxcfa3c6858534b [fe80::d0b7:aff:fee3:ecd%9]:123
Jan 17 12:17:47.206462 ntpd[1936]: Listen normally on 8 cilium_host [fe80::5c30:3aff:fe35:8b08%4]:123
Jan 17 12:17:47.206518 ntpd[1936]: Listen normally on 9 cilium_vxlan [fe80::ac5f:7bff:fe81:3104%5]:123
Jan 17 12:17:47.206559 ntpd[1936]: Listen normally on 10 lxc_health [fe80::600d:1aff:fee8:c112%7]:123
Jan 17 12:17:47.206597 ntpd[1936]: Listen normally on 11 lxcfa3c6858534b [fe80::d0b7:aff:fee3:ecd%9]:123
Jan 17 12:17:47.838468 kubelet[2446]: E0117 12:17:47.838395 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:48.057429 containerd[1955]: time="2025-01-17T12:17:48.057161608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:17:48.059755 containerd[1955]: time="2025-01-17T12:17:48.058418864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:17:48.059755 containerd[1955]: time="2025-01-17T12:17:48.058453902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:17:48.059755 containerd[1955]: time="2025-01-17T12:17:48.058563607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:17:48.176953 systemd[1]: Started cri-containerd-f70d57d44c6ca68811afe722673f633697004c338fef22da1134dc0c7d97fea9.scope - libcontainer container f70d57d44c6ca68811afe722673f633697004c338fef22da1134dc0c7d97fea9.
Jan 17 12:17:48.248079 containerd[1955]: time="2025-01-17T12:17:48.248025832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-tp97b,Uid:332cdf3a-9c15-44f5-b4bb-399c44f4e8be,Namespace:default,Attempt:0,} returns sandbox id \"f70d57d44c6ca68811afe722673f633697004c338fef22da1134dc0c7d97fea9\""
Jan 17 12:17:48.251768 containerd[1955]: time="2025-01-17T12:17:48.251731965Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 17 12:17:48.277454 update_engine[1942]: I20250117 12:17:48.276628 1942 update_attempter.cc:509] Updating boot flags...
Jan 17 12:17:48.337572 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3554)
Jan 17 12:17:48.598198 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3545)
Jan 17 12:17:48.817223 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3545)
Jan 17 12:17:48.839757 kubelet[2446]: E0117 12:17:48.839527 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:49.840475 kubelet[2446]: E0117 12:17:49.840414 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:50.840793 kubelet[2446]: E0117 12:17:50.840753 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:51.055881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649640150.mount: Deactivated successfully.
Jan 17 12:17:51.842192 kubelet[2446]: E0117 12:17:51.842142 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:52.750442 containerd[1955]: time="2025-01-17T12:17:52.750383500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:17:52.751860 containerd[1955]: time="2025-01-17T12:17:52.751733284Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018"
Jan 17 12:17:52.752826 containerd[1955]: time="2025-01-17T12:17:52.752769016Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:17:52.765641 containerd[1955]: time="2025-01-17T12:17:52.764510794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:17:52.765641 containerd[1955]: time="2025-01-17T12:17:52.765497227Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 4.513719954s"
Jan 17 12:17:52.765641 containerd[1955]: time="2025-01-17T12:17:52.765537865Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 17 12:17:52.770052 containerd[1955]: time="2025-01-17T12:17:52.770018525Z" level=info msg="CreateContainer within sandbox \"f70d57d44c6ca68811afe722673f633697004c338fef22da1134dc0c7d97fea9\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 17 12:17:52.786811 containerd[1955]: time="2025-01-17T12:17:52.786768054Z" level=info msg="CreateContainer within sandbox \"f70d57d44c6ca68811afe722673f633697004c338fef22da1134dc0c7d97fea9\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"c5f8be271facf825feac9936b43b69a52f046c5e002d88f1df0f23cec62a2048\""
Jan 17 12:17:52.787458 containerd[1955]: time="2025-01-17T12:17:52.787411801Z" level=info msg="StartContainer for \"c5f8be271facf825feac9936b43b69a52f046c5e002d88f1df0f23cec62a2048\""
Jan 17 12:17:52.848637 kubelet[2446]: E0117 12:17:52.843725 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:52.865658 systemd[1]: Started cri-containerd-c5f8be271facf825feac9936b43b69a52f046c5e002d88f1df0f23cec62a2048.scope - libcontainer container c5f8be271facf825feac9936b43b69a52f046c5e002d88f1df0f23cec62a2048.
Jan 17 12:17:52.896553 containerd[1955]: time="2025-01-17T12:17:52.896469742Z" level=info msg="StartContainer for \"c5f8be271facf825feac9936b43b69a52f046c5e002d88f1df0f23cec62a2048\" returns successfully"
Jan 17 12:17:53.844679 kubelet[2446]: E0117 12:17:53.844629 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:54.845734 kubelet[2446]: E0117 12:17:54.845682 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:55.846379 kubelet[2446]: E0117 12:17:55.846330 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:56.846497 kubelet[2446]: E0117 12:17:56.846441 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:57.846783 kubelet[2446]: E0117 12:17:57.846697 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:58.847659 kubelet[2446]: E0117 12:17:58.847611 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:17:59.847782 kubelet[2446]: E0117 12:17:59.847730 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:00.848087 kubelet[2446]: E0117 12:18:00.848032 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:01.433959 kubelet[2446]: I0117 12:18:01.433915 2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-tp97b" podStartSLOduration=16.916711407 podStartE2EDuration="21.433860828s" podCreationTimestamp="2025-01-17 12:17:40 +0000 UTC" firstStartedPulling="2025-01-17 12:17:48.250439268 +0000 UTC m=+27.114805760" lastFinishedPulling="2025-01-17 12:17:52.767588678 +0000 UTC m=+31.631955181" observedRunningTime="2025-01-17 12:17:53.209865252 +0000 UTC m=+32.074231764" watchObservedRunningTime="2025-01-17 12:18:01.433860828 +0000 UTC m=+40.298227340"
Jan 17 12:18:01.434215 kubelet[2446]: I0117 12:18:01.434060 2446 topology_manager.go:215] "Topology Admit Handler" podUID="f8593b51-e697-4a38-92b7-803b6d0f87b1" podNamespace="default" podName="nfs-server-provisioner-0"
Jan 17 12:18:01.447943 systemd[1]: Created slice kubepods-besteffort-podf8593b51_e697_4a38_92b7_803b6d0f87b1.slice - libcontainer container kubepods-besteffort-podf8593b51_e697_4a38_92b7_803b6d0f87b1.slice.
Jan 17 12:18:01.548012 kubelet[2446]: I0117 12:18:01.547907 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr7v9\" (UniqueName: \"kubernetes.io/projected/f8593b51-e697-4a38-92b7-803b6d0f87b1-kube-api-access-vr7v9\") pod \"nfs-server-provisioner-0\" (UID: \"f8593b51-e697-4a38-92b7-803b6d0f87b1\") " pod="default/nfs-server-provisioner-0"
Jan 17 12:18:01.548012 kubelet[2446]: I0117 12:18:01.547974 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/f8593b51-e697-4a38-92b7-803b6d0f87b1-data\") pod \"nfs-server-provisioner-0\" (UID: \"f8593b51-e697-4a38-92b7-803b6d0f87b1\") " pod="default/nfs-server-provisioner-0"
Jan 17 12:18:01.756864 containerd[1955]: time="2025-01-17T12:18:01.756722282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f8593b51-e697-4a38-92b7-803b6d0f87b1,Namespace:default,Attempt:0,}"
Jan 17 12:18:01.820340 kubelet[2446]: E0117 12:18:01.819102 2446 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:01.851252 kubelet[2446]: E0117 12:18:01.851046 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:01.996824 systemd-networkd[1808]: lxc378724a6cb68: Link UP
Jan 17 12:18:02.019982 (udev-worker)[3895]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:18:02.031207 kernel: eth0: renamed from tmp428e9
Jan 17 12:18:02.061709 systemd-networkd[1808]: lxc378724a6cb68: Gained carrier
Jan 17 12:18:02.633103 containerd[1955]: time="2025-01-17T12:18:02.632892669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:18:02.633103 containerd[1955]: time="2025-01-17T12:18:02.632969464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:18:02.633103 containerd[1955]: time="2025-01-17T12:18:02.632993679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:18:02.633602 containerd[1955]: time="2025-01-17T12:18:02.633344749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:18:02.678455 systemd[1]: Started cri-containerd-428e9de88e47b7d8648d36a707557f535baa5ec0fa5c022f45d8b8b4b8a460d2.scope - libcontainer container 428e9de88e47b7d8648d36a707557f535baa5ec0fa5c022f45d8b8b4b8a460d2.
Jan 17 12:18:02.806207 containerd[1955]: time="2025-01-17T12:18:02.806137927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f8593b51-e697-4a38-92b7-803b6d0f87b1,Namespace:default,Attempt:0,} returns sandbox id \"428e9de88e47b7d8648d36a707557f535baa5ec0fa5c022f45d8b8b4b8a460d2\""
Jan 17 12:18:02.813367 containerd[1955]: time="2025-01-17T12:18:02.812615558Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 17 12:18:02.851893 kubelet[2446]: E0117 12:18:02.851843 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:03.733443 systemd-networkd[1808]: lxc378724a6cb68: Gained IPv6LL
Jan 17 12:18:03.856259 kubelet[2446]: E0117 12:18:03.853391 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:04.854705 kubelet[2446]: E0117 12:18:04.854368 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:05.856221 kubelet[2446]: E0117 12:18:05.855098 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:06.102051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount679149190.mount: Deactivated successfully.
Jan 17 12:18:06.205952 ntpd[1936]: Listen normally on 12 lxc378724a6cb68 [fe80::e414:2ff:fe17:5309%11]:123
Jan 17 12:18:06.206633 ntpd[1936]: 17 Jan 12:18:06 ntpd[1936]: Listen normally on 12 lxc378724a6cb68 [fe80::e414:2ff:fe17:5309%11]:123
Jan 17 12:18:06.855643 kubelet[2446]: E0117 12:18:06.855235 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:07.855974 kubelet[2446]: E0117 12:18:07.855928 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:08.607496 containerd[1955]: time="2025-01-17T12:18:08.607444429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:08.610079 containerd[1955]: time="2025-01-17T12:18:08.610016696Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Jan 17 12:18:08.612016 containerd[1955]: time="2025-01-17T12:18:08.611494174Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:08.616031 containerd[1955]: time="2025-01-17T12:18:08.615962243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:08.617200 containerd[1955]: time="2025-01-17T12:18:08.616997791Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.803623463s"
Jan 17 12:18:08.617200 containerd[1955]: time="2025-01-17T12:18:08.617050137Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Jan 17 12:18:08.624195 containerd[1955]: time="2025-01-17T12:18:08.622022107Z" level=info msg="CreateContainer within sandbox \"428e9de88e47b7d8648d36a707557f535baa5ec0fa5c022f45d8b8b4b8a460d2\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 17 12:18:08.640029 containerd[1955]: time="2025-01-17T12:18:08.639987725Z" level=info msg="CreateContainer within sandbox \"428e9de88e47b7d8648d36a707557f535baa5ec0fa5c022f45d8b8b4b8a460d2\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"150adcc1bc9ac4dac23e999e772b2f11f30b0a82bfcdfcd4dae02c714f09b989\""
Jan 17 12:18:08.640831 containerd[1955]: time="2025-01-17T12:18:08.640787865Z" level=info msg="StartContainer for \"150adcc1bc9ac4dac23e999e772b2f11f30b0a82bfcdfcd4dae02c714f09b989\""
Jan 17 12:18:08.679739 systemd[1]: run-containerd-runc-k8s.io-150adcc1bc9ac4dac23e999e772b2f11f30b0a82bfcdfcd4dae02c714f09b989-runc.JGkjM5.mount: Deactivated successfully.
Jan 17 12:18:08.690608 systemd[1]: Started cri-containerd-150adcc1bc9ac4dac23e999e772b2f11f30b0a82bfcdfcd4dae02c714f09b989.scope - libcontainer container 150adcc1bc9ac4dac23e999e772b2f11f30b0a82bfcdfcd4dae02c714f09b989.
Jan 17 12:18:08.738159 containerd[1955]: time="2025-01-17T12:18:08.738105194Z" level=info msg="StartContainer for \"150adcc1bc9ac4dac23e999e772b2f11f30b0a82bfcdfcd4dae02c714f09b989\" returns successfully"
Jan 17 12:18:08.857262 kubelet[2446]: E0117 12:18:08.856871 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:09.364974 kubelet[2446]: I0117 12:18:09.364921 2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.558378375 podStartE2EDuration="8.364874108s" podCreationTimestamp="2025-01-17 12:18:01 +0000 UTC" firstStartedPulling="2025-01-17 12:18:02.811262993 +0000 UTC m=+41.675629494" lastFinishedPulling="2025-01-17 12:18:08.617758735 +0000 UTC m=+47.482125227" observedRunningTime="2025-01-17 12:18:09.364459347 +0000 UTC m=+48.228825865" watchObservedRunningTime="2025-01-17 12:18:09.364874108 +0000 UTC m=+48.229240620"
Jan 17 12:18:09.857228 kubelet[2446]: E0117 12:18:09.857133 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:10.858331 kubelet[2446]: E0117 12:18:10.858279 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:11.858744 kubelet[2446]: E0117 12:18:11.858694 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:12.858957 kubelet[2446]: E0117 12:18:12.858895 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:13.861520 kubelet[2446]: E0117 12:18:13.861469 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:14.862247 kubelet[2446]: E0117 12:18:14.862199 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:15.862468 kubelet[2446]: E0117 12:18:15.862407 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:16.862721 kubelet[2446]: E0117 12:18:16.862662 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:17.863196 kubelet[2446]: E0117 12:18:17.863143 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:18.537586 kubelet[2446]: I0117 12:18:18.537543 2446 topology_manager.go:215] "Topology Admit Handler" podUID="a3576da7-bab2-419d-bbcf-1f8859790a20" podNamespace="default" podName="test-pod-1"
Jan 17 12:18:18.553206 systemd[1]: Created slice kubepods-besteffort-poda3576da7_bab2_419d_bbcf_1f8859790a20.slice - libcontainer container kubepods-besteffort-poda3576da7_bab2_419d_bbcf_1f8859790a20.slice.
Jan 17 12:18:18.591456 kubelet[2446]: I0117 12:18:18.591276 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg2vq\" (UniqueName: \"kubernetes.io/projected/a3576da7-bab2-419d-bbcf-1f8859790a20-kube-api-access-hg2vq\") pod \"test-pod-1\" (UID: \"a3576da7-bab2-419d-bbcf-1f8859790a20\") " pod="default/test-pod-1"
Jan 17 12:18:18.591616 kubelet[2446]: I0117 12:18:18.591491 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0096a6b0-23cb-438a-bd3b-641e1f4a83af\" (UniqueName: \"kubernetes.io/nfs/a3576da7-bab2-419d-bbcf-1f8859790a20-pvc-0096a6b0-23cb-438a-bd3b-641e1f4a83af\") pod \"test-pod-1\" (UID: \"a3576da7-bab2-419d-bbcf-1f8859790a20\") " pod="default/test-pod-1"
Jan 17 12:18:18.747394 kernel: FS-Cache: Loaded
Jan 17 12:18:18.831488 kernel: RPC: Registered named UNIX socket transport module.
Jan 17 12:18:18.831634 kernel: RPC: Registered udp transport module.
Jan 17 12:18:18.831678 kernel: RPC: Registered tcp transport module.
Jan 17 12:18:18.832579 kernel: RPC: Registered tcp-with-tls transport module.
Jan 17 12:18:18.832655 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 17 12:18:18.864148 kubelet[2446]: E0117 12:18:18.864108 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:19.369537 kernel: NFS: Registering the id_resolver key type
Jan 17 12:18:19.369735 kernel: Key type id_resolver registered
Jan 17 12:18:19.369770 kernel: Key type id_legacy registered
Jan 17 12:18:19.510267 nfsidmap[4084]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Jan 17 12:18:19.515460 nfsidmap[4085]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Jan 17 12:18:19.773257 containerd[1955]: time="2025-01-17T12:18:19.773206983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a3576da7-bab2-419d-bbcf-1f8859790a20,Namespace:default,Attempt:0,}"
Jan 17 12:18:19.808816 (udev-worker)[4071]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:18:19.809021 (udev-worker)[4079]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:18:19.809919 systemd-networkd[1808]: lxc612e7fbd76c3: Link UP
Jan 17 12:18:19.822209 kernel: eth0: renamed from tmp7e6b6
Jan 17 12:18:19.829581 systemd-networkd[1808]: lxc612e7fbd76c3: Gained carrier
Jan 17 12:18:19.864710 kubelet[2446]: E0117 12:18:19.864659 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:20.370856 containerd[1955]: time="2025-01-17T12:18:20.370745513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:18:20.370856 containerd[1955]: time="2025-01-17T12:18:20.370815420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:18:20.371614 containerd[1955]: time="2025-01-17T12:18:20.370837406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:18:20.371614 containerd[1955]: time="2025-01-17T12:18:20.371425159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:18:20.451516 systemd[1]: Started cri-containerd-7e6b625ac9291ee650dae58b1e2d31593476433af11b85ab21aed2a986fb1b30.scope - libcontainer container 7e6b625ac9291ee650dae58b1e2d31593476433af11b85ab21aed2a986fb1b30.
Jan 17 12:18:20.514246 containerd[1955]: time="2025-01-17T12:18:20.514201594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a3576da7-bab2-419d-bbcf-1f8859790a20,Namespace:default,Attempt:0,} returns sandbox id \"7e6b625ac9291ee650dae58b1e2d31593476433af11b85ab21aed2a986fb1b30\""
Jan 17 12:18:20.516078 containerd[1955]: time="2025-01-17T12:18:20.516042148Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 17 12:18:20.865469 kubelet[2446]: E0117 12:18:20.865408 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:20.869774 containerd[1955]: time="2025-01-17T12:18:20.869721203Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:18:20.870902 containerd[1955]: time="2025-01-17T12:18:20.870826842Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 17 12:18:20.874720 containerd[1955]: time="2025-01-17T12:18:20.874682066Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 358.601671ms"
Jan 17 12:18:20.874720 containerd[1955]: time="2025-01-17T12:18:20.874719683Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 17 12:18:20.879545 containerd[1955]: time="2025-01-17T12:18:20.879499183Z" level=info msg="CreateContainer within sandbox \"7e6b625ac9291ee650dae58b1e2d31593476433af11b85ab21aed2a986fb1b30\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 17 12:18:20.882840 systemd-networkd[1808]: lxc612e7fbd76c3: Gained IPv6LL
Jan 17 12:18:20.941984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3982407056.mount: Deactivated successfully.
Jan 17 12:18:21.011541 containerd[1955]: time="2025-01-17T12:18:21.011445465Z" level=info msg="CreateContainer within sandbox \"7e6b625ac9291ee650dae58b1e2d31593476433af11b85ab21aed2a986fb1b30\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"2a35df293bc08adc805682c79dfd9d2d360e0d44edcad57e62b2142b45e76e95\""
Jan 17 12:18:21.014691 containerd[1955]: time="2025-01-17T12:18:21.013612847Z" level=info msg="StartContainer for \"2a35df293bc08adc805682c79dfd9d2d360e0d44edcad57e62b2142b45e76e95\""
Jan 17 12:18:21.059440 systemd[1]: Started cri-containerd-2a35df293bc08adc805682c79dfd9d2d360e0d44edcad57e62b2142b45e76e95.scope - libcontainer container 2a35df293bc08adc805682c79dfd9d2d360e0d44edcad57e62b2142b45e76e95.
Jan 17 12:18:21.097507 containerd[1955]: time="2025-01-17T12:18:21.096295806Z" level=info msg="StartContainer for \"2a35df293bc08adc805682c79dfd9d2d360e0d44edcad57e62b2142b45e76e95\" returns successfully"
Jan 17 12:18:21.814057 kubelet[2446]: E0117 12:18:21.813976 2446 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:21.865989 kubelet[2446]: E0117 12:18:21.865944 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:22.866833 kubelet[2446]: E0117 12:18:22.866718 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:23.206141 ntpd[1936]: Listen normally on 13 lxc612e7fbd76c3 [fe80::9cf3:68ff:fe05:56fe%13]:123
Jan 17 12:18:23.206580 ntpd[1936]: 17 Jan 12:18:23 ntpd[1936]: Listen normally on 13 lxc612e7fbd76c3 [fe80::9cf3:68ff:fe05:56fe%13]:123
Jan 17 12:18:23.867318 kubelet[2446]: E0117 12:18:23.867267 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:24.867689 kubelet[2446]: E0117 12:18:24.867628 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:25.868445 kubelet[2446]: E0117 12:18:25.868392 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:26.869640 kubelet[2446]: E0117 12:18:26.869574 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:27.437363 kubelet[2446]: I0117 12:18:27.437322 2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=25.077789769 podStartE2EDuration="25.437281992s" podCreationTimestamp="2025-01-17 12:18:02 +0000 UTC" firstStartedPulling="2025-01-17 12:18:20.515621185 +0000 UTC m=+59.379987690" lastFinishedPulling="2025-01-17 12:18:20.875113422 +0000 UTC m=+59.739479913" observedRunningTime="2025-01-17 12:18:21.413899377 +0000 UTC m=+60.278265889" watchObservedRunningTime="2025-01-17 12:18:27.437281992 +0000 UTC m=+66.301648509"
Jan 17 12:18:27.467157 systemd[1]: run-containerd-runc-k8s.io-7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59-runc.1NsL2h.mount: Deactivated successfully.
Jan 17 12:18:27.780116 containerd[1955]: time="2025-01-17T12:18:27.779957667Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 12:18:27.870990 kubelet[2446]: E0117 12:18:27.870888 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:18:27.876678 containerd[1955]: time="2025-01-17T12:18:27.876618323Z" level=info msg="StopContainer for \"7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59\" with timeout 2 (s)"
Jan 17 12:18:27.878318 containerd[1955]: time="2025-01-17T12:18:27.878281428Z" level=info msg="Stop container \"7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59\" with signal terminated"
Jan 17 12:18:27.899951 systemd-networkd[1808]: lxc_health: Link DOWN
Jan 17 12:18:27.899962 systemd-networkd[1808]: lxc_health: Lost carrier
Jan 17 12:18:27.941419 systemd[1]: cri-containerd-7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59.scope: Deactivated successfully.
Jan 17 12:18:27.941921 systemd[1]: cri-containerd-7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59.scope: Consumed 8.554s CPU time.
Jan 17 12:18:27.975138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59-rootfs.mount: Deactivated successfully.
Jan 17 12:18:28.065125 containerd[1955]: time="2025-01-17T12:18:27.993641929Z" level=info msg="shim disconnected" id=7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59 namespace=k8s.io
Jan 17 12:18:28.065125 containerd[1955]: time="2025-01-17T12:18:28.065046032Z" level=warning msg="cleaning up after shim disconnected" id=7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59 namespace=k8s.io
Jan 17 12:18:28.065125 containerd[1955]: time="2025-01-17T12:18:28.065066279Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:18:28.115800 containerd[1955]: time="2025-01-17T12:18:28.115752984Z" level=info msg="StopContainer for \"7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59\" returns successfully"
Jan 17 12:18:28.133979 containerd[1955]: time="2025-01-17T12:18:28.133874469Z" level=info msg="StopPodSandbox for \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\""
Jan 17 12:18:28.133979 containerd[1955]: time="2025-01-17T12:18:28.133938405Z" level=info msg="Container to stop \"7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:18:28.133979 containerd[1955]: time="2025-01-17T12:18:28.133952940Z" level=info msg="Container to stop \"8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:18:28.133979 containerd[1955]: time="2025-01-17T12:18:28.133964592Z" level=info msg="Container to stop \"ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:18:28.133979 containerd[1955]: time="2025-01-17T12:18:28.133973510Z" level=info msg="Container to stop \"a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:18:28.133979 containerd[1955]: time="2025-01-17T12:18:28.133982564Z" level=info msg="Container to stop \"9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 12:18:28.135820 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352-shm.mount: Deactivated successfully.
Jan 17 12:18:28.171322 systemd[1]: cri-containerd-98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352.scope: Deactivated successfully.
Jan 17 12:18:28.232646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352-rootfs.mount: Deactivated successfully.
Jan 17 12:18:28.252405 containerd[1955]: time="2025-01-17T12:18:28.251053088Z" level=info msg="shim disconnected" id=98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352 namespace=k8s.io
Jan 17 12:18:28.252405 containerd[1955]: time="2025-01-17T12:18:28.251534564Z" level=warning msg="cleaning up after shim disconnected" id=98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352 namespace=k8s.io
Jan 17 12:18:28.252405 containerd[1955]: time="2025-01-17T12:18:28.251558876Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:18:28.318029 containerd[1955]: time="2025-01-17T12:18:28.317887277Z" level=info msg="TearDown network for sandbox \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\" successfully"
Jan 17 12:18:28.318029 containerd[1955]: time="2025-01-17T12:18:28.317935321Z" level=info msg="StopPodSandbox for \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\" returns successfully"
Jan 17 12:18:28.422918 kubelet[2446]: I0117 12:18:28.422874 2446 scope.go:117] "RemoveContainer" containerID="7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59"
Jan 17 12:18:28.425711 containerd[1955]: time="2025-01-17T12:18:28.425667994Z" level=info msg="RemoveContainer for \"7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59\""
Jan 17 12:18:28.444058 containerd[1955]: time="2025-01-17T12:18:28.443562423Z" level=info msg="RemoveContainer for \"7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59\" returns successfully"
Jan 17 12:18:28.447855 kubelet[2446]: I0117 12:18:28.443865 2446 scope.go:117] "RemoveContainer" containerID="9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544"
Jan 17 12:18:28.454190 kubelet[2446]: I0117 12:18:28.454128 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-etc-cni-netd\") pod \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") "
Jan 17 12:18:28.454426 kubelet[2446]: I0117 12:18:28.454201 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-cilium-config-path\") pod \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") "
Jan 17 12:18:28.454426 kubelet[2446]: I0117 12:18:28.454231 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-cilium-run\") pod \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") "
Jan 17 12:18:28.454426 kubelet[2446]: I0117 12:18:28.454255 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-bpf-maps\") pod \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") "
Jan 17 12:18:28.454426 kubelet[2446]: I0117 12:18:28.454278 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-host-proc-sys-kernel\") pod \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") "
Jan 17 12:18:28.454426 kubelet[2446]: I0117 12:18:28.454309 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-clustermesh-secrets\") pod \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") "
Jan 17 12:18:28.454426 kubelet[2446]: I0117 12:18:28.454333 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-xtables-lock\") pod \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") "
Jan 17 12:18:28.456377 kubelet[2446]: I0117 12:18:28.454362 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64pqn\" (UniqueName: \"kubernetes.io/projected/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-kube-api-access-64pqn\") pod \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") "
Jan 17 12:18:28.456377 kubelet[2446]: I0117 12:18:28.454390 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-hubble-tls\") pod \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") "
Jan 17 12:18:28.456377 kubelet[2446]: I0117 12:18:28.454417 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-lib-modules\") pod \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") "
Jan 17 12:18:28.456377 kubelet[2446]: I0117 12:18:28.454445 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-hostproc\") pod \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") "
Jan 17 12:18:28.456377 kubelet[2446]: I0117 12:18:28.454473 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-cni-path\") pod \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") "
Jan 17 12:18:28.456377 kubelet[2446]: I0117 12:18:28.454501 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-cilium-cgroup\") pod \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") "
Jan 17 12:18:28.456636 kubelet[2446]: I0117 12:18:28.454529 2446 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-host-proc-sys-net\") pod \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\" (UID: \"7fce3788-01e9-4e7f-ad33-6fdeab5a6981\") "
Jan 17 12:18:28.456636 kubelet[2446]: I0117 12:18:28.454611 2446 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7fce3788-01e9-4e7f-ad33-6fdeab5a6981" (UID: "7fce3788-01e9-4e7f-ad33-6fdeab5a6981"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:18:28.456725 containerd[1955]: time="2025-01-17T12:18:28.456353488Z" level=info msg="RemoveContainer for \"9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544\""
Jan 17 12:18:28.457318 kubelet[2446]: I0117 12:18:28.457273 2446 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7fce3788-01e9-4e7f-ad33-6fdeab5a6981" (UID: "7fce3788-01e9-4e7f-ad33-6fdeab5a6981"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:18:28.457415 kubelet[2446]: I0117 12:18:28.457323 2446 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7fce3788-01e9-4e7f-ad33-6fdeab5a6981" (UID: "7fce3788-01e9-4e7f-ad33-6fdeab5a6981"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:18:28.460747 kubelet[2446]: I0117 12:18:28.460293 2446 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7fce3788-01e9-4e7f-ad33-6fdeab5a6981" (UID: "7fce3788-01e9-4e7f-ad33-6fdeab5a6981"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:18:28.460747 kubelet[2446]: I0117 12:18:28.460367 2446 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7fce3788-01e9-4e7f-ad33-6fdeab5a6981" (UID: "7fce3788-01e9-4e7f-ad33-6fdeab5a6981"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:18:28.460747 kubelet[2446]: I0117 12:18:28.460409 2446 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7fce3788-01e9-4e7f-ad33-6fdeab5a6981" (UID: "7fce3788-01e9-4e7f-ad33-6fdeab5a6981"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:18:28.465025 containerd[1955]: time="2025-01-17T12:18:28.464666478Z" level=info msg="RemoveContainer for \"9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544\" returns successfully"
Jan 17 12:18:28.470186 kubelet[2446]: I0117 12:18:28.468318 2446 scope.go:117] "RemoveContainer" containerID="a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9"
Jan 17 12:18:28.473521 kubelet[2446]: I0117 12:18:28.471862 2446 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7fce3788-01e9-4e7f-ad33-6fdeab5a6981" (UID: "7fce3788-01e9-4e7f-ad33-6fdeab5a6981"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 17 12:18:28.473521 kubelet[2446]: I0117 12:18:28.471970 2446 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7fce3788-01e9-4e7f-ad33-6fdeab5a6981" (UID: "7fce3788-01e9-4e7f-ad33-6fdeab5a6981"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:18:28.473521 kubelet[2446]: I0117 12:18:28.472051 2446 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-hostproc" (OuterVolumeSpecName: "hostproc") pod "7fce3788-01e9-4e7f-ad33-6fdeab5a6981" (UID: "7fce3788-01e9-4e7f-ad33-6fdeab5a6981"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:18:28.473521 kubelet[2446]: I0117 12:18:28.472086 2446 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-cni-path" (OuterVolumeSpecName: "cni-path") pod "7fce3788-01e9-4e7f-ad33-6fdeab5a6981" (UID: "7fce3788-01e9-4e7f-ad33-6fdeab5a6981"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:18:28.473521 kubelet[2446]: I0117 12:18:28.472152 2446 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7fce3788-01e9-4e7f-ad33-6fdeab5a6981" (UID: "7fce3788-01e9-4e7f-ad33-6fdeab5a6981"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 12:18:28.476201 containerd[1955]: time="2025-01-17T12:18:28.475859954Z" level=info msg="RemoveContainer for \"a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9\""
Jan 17 12:18:28.478852 systemd[1]: var-lib-kubelet-pods-7fce3788\x2d01e9\x2d4e7f\x2dad33\x2d6fdeab5a6981-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 17 12:18:28.483630 systemd[1]: var-lib-kubelet-pods-7fce3788\x2d01e9\x2d4e7f\x2dad33\x2d6fdeab5a6981-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d64pqn.mount: Deactivated successfully.
Jan 17 12:18:28.486650 containerd[1955]: time="2025-01-17T12:18:28.486612980Z" level=info msg="RemoveContainer for \"a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9\" returns successfully" Jan 17 12:18:28.490377 kubelet[2446]: I0117 12:18:28.487677 2446 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7fce3788-01e9-4e7f-ad33-6fdeab5a6981" (UID: "7fce3788-01e9-4e7f-ad33-6fdeab5a6981"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:18:28.491951 systemd[1]: var-lib-kubelet-pods-7fce3788\x2d01e9\x2d4e7f\x2dad33\x2d6fdeab5a6981-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 12:18:28.494474 kubelet[2446]: I0117 12:18:28.494245 2446 scope.go:117] "RemoveContainer" containerID="ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905" Jan 17 12:18:28.494991 kubelet[2446]: I0117 12:18:28.494961 2446 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-kube-api-access-64pqn" (OuterVolumeSpecName: "kube-api-access-64pqn") pod "7fce3788-01e9-4e7f-ad33-6fdeab5a6981" (UID: "7fce3788-01e9-4e7f-ad33-6fdeab5a6981"). InnerVolumeSpecName "kube-api-access-64pqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:18:28.496391 kubelet[2446]: I0117 12:18:28.496341 2446 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7fce3788-01e9-4e7f-ad33-6fdeab5a6981" (UID: "7fce3788-01e9-4e7f-ad33-6fdeab5a6981"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 12:18:28.497500 containerd[1955]: time="2025-01-17T12:18:28.497290192Z" level=info msg="RemoveContainer for \"ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905\"" Jan 17 12:18:28.514580 containerd[1955]: time="2025-01-17T12:18:28.513866602Z" level=info msg="RemoveContainer for \"ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905\" returns successfully" Jan 17 12:18:28.514721 kubelet[2446]: I0117 12:18:28.514165 2446 scope.go:117] "RemoveContainer" containerID="8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7" Jan 17 12:18:28.515835 containerd[1955]: time="2025-01-17T12:18:28.515804064Z" level=info msg="RemoveContainer for \"8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7\"" Jan 17 12:18:28.518964 containerd[1955]: time="2025-01-17T12:18:28.518922445Z" level=info msg="RemoveContainer for \"8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7\" returns successfully" Jan 17 12:18:28.519193 kubelet[2446]: I0117 12:18:28.519156 2446 scope.go:117] "RemoveContainer" containerID="7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59" Jan 17 12:18:28.524003 containerd[1955]: time="2025-01-17T12:18:28.523935321Z" level=error msg="ContainerStatus for \"7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59\": not found" Jan 17 12:18:28.534585 kubelet[2446]: E0117 12:18:28.534549 2446 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59\": not found" containerID="7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59" Jan 17 12:18:28.534908 kubelet[2446]: I0117 12:18:28.534727 2446 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59"} err="failed to get container status \"7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59\": rpc error: code = NotFound desc = an error occurred when try to find container \"7465f682d7d231c0d371dae93a055f46ec2c734514e11a3510e7eb8a31612c59\": not found" Jan 17 12:18:28.534908 kubelet[2446]: I0117 12:18:28.534754 2446 scope.go:117] "RemoveContainer" containerID="9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544" Jan 17 12:18:28.535380 containerd[1955]: time="2025-01-17T12:18:28.535219311Z" level=error msg="ContainerStatus for \"9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544\": not found" Jan 17 12:18:28.535914 kubelet[2446]: E0117 12:18:28.535896 2446 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544\": not found" containerID="9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544" Jan 17 12:18:28.538187 kubelet[2446]: I0117 12:18:28.536105 2446 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544"} err="failed to get container status \"9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c367f801ea9c462a4bf0e90832d223c4297147ede99cfa0740ad44ed97ac544\": not found" Jan 17 12:18:28.538187 kubelet[2446]: I0117 12:18:28.536128 2446 scope.go:117] "RemoveContainer" 
containerID="a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9" Jan 17 12:18:28.549786 containerd[1955]: time="2025-01-17T12:18:28.549732114Z" level=error msg="ContainerStatus for \"a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9\": not found" Jan 17 12:18:28.550243 kubelet[2446]: E0117 12:18:28.550215 2446 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9\": not found" containerID="a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9" Jan 17 12:18:28.550384 kubelet[2446]: I0117 12:18:28.550270 2446 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9"} err="failed to get container status \"a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9\": rpc error: code = NotFound desc = an error occurred when try to find container \"a3bd89b97b057a1fbb6d8cc348055af58903b3c747d14cc64cd501d9db999ff9\": not found" Jan 17 12:18:28.550384 kubelet[2446]: I0117 12:18:28.550289 2446 scope.go:117] "RemoveContainer" containerID="ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905" Jan 17 12:18:28.550601 containerd[1955]: time="2025-01-17T12:18:28.550553060Z" level=error msg="ContainerStatus for \"ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905\": not found" Jan 17 12:18:28.550922 kubelet[2446]: E0117 12:18:28.550795 2446 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905\": not found" containerID="ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905" Jan 17 12:18:28.550922 kubelet[2446]: I0117 12:18:28.550835 2446 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905"} err="failed to get container status \"ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad06adfbe2a30d7b19a22ced5082426bf5a5083c7ae0a94b2eb4d43f8f39c905\": not found" Jan 17 12:18:28.550922 kubelet[2446]: I0117 12:18:28.550853 2446 scope.go:117] "RemoveContainer" containerID="8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7" Jan 17 12:18:28.551209 containerd[1955]: time="2025-01-17T12:18:28.551157344Z" level=error msg="ContainerStatus for \"8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7\": not found" Jan 17 12:18:28.551361 kubelet[2446]: E0117 12:18:28.551317 2446 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7\": not found" containerID="8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7" Jan 17 12:18:28.551488 kubelet[2446]: I0117 12:18:28.551369 2446 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7"} err="failed to get container status \"8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"8f5cc8d16c626859a06947c993a974ad8876cff62f1f1d19930ed1de1bdf91f7\": not found" Jan 17 12:18:28.555682 kubelet[2446]: I0117 12:18:28.555633 2446 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-lib-modules\") on node \"172.31.21.110\" DevicePath \"\"" Jan 17 12:18:28.555682 kubelet[2446]: I0117 12:18:28.555665 2446 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-hostproc\") on node \"172.31.21.110\" DevicePath \"\"" Jan 17 12:18:28.555682 kubelet[2446]: I0117 12:18:28.555680 2446 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-cni-path\") on node \"172.31.21.110\" DevicePath \"\"" Jan 17 12:18:28.555682 kubelet[2446]: I0117 12:18:28.555693 2446 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-cilium-cgroup\") on node \"172.31.21.110\" DevicePath \"\"" Jan 17 12:18:28.555919 kubelet[2446]: I0117 12:18:28.555706 2446 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-host-proc-sys-net\") on node \"172.31.21.110\" DevicePath \"\"" Jan 17 12:18:28.555919 kubelet[2446]: I0117 12:18:28.555720 2446 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-etc-cni-netd\") on node \"172.31.21.110\" DevicePath \"\"" Jan 17 12:18:28.555919 kubelet[2446]: I0117 12:18:28.555744 2446 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-cilium-config-path\") on node \"172.31.21.110\" DevicePath \"\"" Jan 17 12:18:28.555919 kubelet[2446]: I0117 12:18:28.555755 2446 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-cilium-run\") on node \"172.31.21.110\" DevicePath \"\"" Jan 17 12:18:28.555919 kubelet[2446]: I0117 12:18:28.555767 2446 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-bpf-maps\") on node \"172.31.21.110\" DevicePath \"\"" Jan 17 12:18:28.555919 kubelet[2446]: I0117 12:18:28.555782 2446 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-host-proc-sys-kernel\") on node \"172.31.21.110\" DevicePath \"\"" Jan 17 12:18:28.555919 kubelet[2446]: I0117 12:18:28.555796 2446 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-clustermesh-secrets\") on node \"172.31.21.110\" DevicePath \"\"" Jan 17 12:18:28.555919 kubelet[2446]: I0117 12:18:28.555809 2446 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-xtables-lock\") on node \"172.31.21.110\" DevicePath \"\"" Jan 17 12:18:28.556132 kubelet[2446]: I0117 12:18:28.555827 2446 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-64pqn\" (UniqueName: \"kubernetes.io/projected/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-kube-api-access-64pqn\") on node \"172.31.21.110\" DevicePath \"\"" Jan 17 12:18:28.556132 kubelet[2446]: I0117 12:18:28.555841 2446 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/7fce3788-01e9-4e7f-ad33-6fdeab5a6981-hubble-tls\") on node \"172.31.21.110\" DevicePath \"\"" Jan 17 12:18:28.724346 systemd[1]: Removed slice kubepods-burstable-pod7fce3788_01e9_4e7f_ad33_6fdeab5a6981.slice - libcontainer container kubepods-burstable-pod7fce3788_01e9_4e7f_ad33_6fdeab5a6981.slice. Jan 17 12:18:28.724493 systemd[1]: kubepods-burstable-pod7fce3788_01e9_4e7f_ad33_6fdeab5a6981.slice: Consumed 8.650s CPU time. Jan 17 12:18:28.875537 kubelet[2446]: E0117 12:18:28.875496 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:29.876557 kubelet[2446]: E0117 12:18:29.876443 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:29.978804 kubelet[2446]: I0117 12:18:29.978760 2446 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7fce3788-01e9-4e7f-ad33-6fdeab5a6981" path="/var/lib/kubelet/pods/7fce3788-01e9-4e7f-ad33-6fdeab5a6981/volumes" Jan 17 12:18:30.205901 ntpd[1936]: Deleting interface #10 lxc_health, fe80::600d:1aff:fee8:c112%7#123, interface stats: received=0, sent=0, dropped=0, active_time=43 secs Jan 17 12:18:30.206316 ntpd[1936]: 17 Jan 12:18:30 ntpd[1936]: Deleting interface #10 lxc_health, fe80::600d:1aff:fee8:c112%7#123, interface stats: received=0, sent=0, dropped=0, active_time=43 secs Jan 17 12:18:30.745334 kubelet[2446]: I0117 12:18:30.745244 2446 topology_manager.go:215] "Topology Admit Handler" podUID="a14b8ff2-03ff-4f0f-adde-a77da33ac007" podNamespace="kube-system" podName="cilium-operator-5cc964979-ftktj" Jan 17 12:18:30.745532 kubelet[2446]: E0117 12:18:30.745363 2446 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fce3788-01e9-4e7f-ad33-6fdeab5a6981" containerName="mount-cgroup" Jan 17 12:18:30.745532 kubelet[2446]: E0117 12:18:30.745378 2446 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="7fce3788-01e9-4e7f-ad33-6fdeab5a6981" containerName="apply-sysctl-overwrites" Jan 17 12:18:30.745532 kubelet[2446]: E0117 12:18:30.745388 2446 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fce3788-01e9-4e7f-ad33-6fdeab5a6981" containerName="mount-bpf-fs" Jan 17 12:18:30.745532 kubelet[2446]: E0117 12:18:30.745397 2446 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fce3788-01e9-4e7f-ad33-6fdeab5a6981" containerName="clean-cilium-state" Jan 17 12:18:30.745532 kubelet[2446]: E0117 12:18:30.745406 2446 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fce3788-01e9-4e7f-ad33-6fdeab5a6981" containerName="cilium-agent" Jan 17 12:18:30.745532 kubelet[2446]: I0117 12:18:30.745429 2446 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fce3788-01e9-4e7f-ad33-6fdeab5a6981" containerName="cilium-agent" Jan 17 12:18:30.757329 systemd[1]: Created slice kubepods-besteffort-poda14b8ff2_03ff_4f0f_adde_a77da33ac007.slice - libcontainer container kubepods-besteffort-poda14b8ff2_03ff_4f0f_adde_a77da33ac007.slice. Jan 17 12:18:30.786806 kubelet[2446]: I0117 12:18:30.786765 2446 topology_manager.go:215] "Topology Admit Handler" podUID="9bfc2b5a-908c-4a8a-bad8-477cf6394488" podNamespace="kube-system" podName="cilium-9k65m" Jan 17 12:18:30.793519 systemd[1]: Created slice kubepods-burstable-pod9bfc2b5a_908c_4a8a_bad8_477cf6394488.slice - libcontainer container kubepods-burstable-pod9bfc2b5a_908c_4a8a_bad8_477cf6394488.slice. 
Jan 17 12:18:30.825222 kubelet[2446]: W0117 12:18:30.825121 2446 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.31.21.110" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.21.110' and this object Jan 17 12:18:30.825222 kubelet[2446]: W0117 12:18:30.825158 2446 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.21.110" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.21.110' and this object Jan 17 12:18:30.825222 kubelet[2446]: W0117 12:18:30.825121 2446 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.31.21.110" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.21.110' and this object Jan 17 12:18:30.825222 kubelet[2446]: E0117 12:18:30.825194 2446 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.31.21.110" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.21.110' and this object Jan 17 12:18:30.825222 kubelet[2446]: E0117 12:18:30.825194 2446 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.21.110" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.21.110' and this object Jan 17 12:18:30.825514 kubelet[2446]: E0117 12:18:30.825161 2446 
reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.31.21.110" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.21.110' and this object Jan 17 12:18:30.870745 kubelet[2446]: I0117 12:18:30.870690 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9bfc2b5a-908c-4a8a-bad8-477cf6394488-etc-cni-netd\") pod \"cilium-9k65m\" (UID: \"9bfc2b5a-908c-4a8a-bad8-477cf6394488\") " pod="kube-system/cilium-9k65m" Jan 17 12:18:30.870745 kubelet[2446]: I0117 12:18:30.870748 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bfc2b5a-908c-4a8a-bad8-477cf6394488-xtables-lock\") pod \"cilium-9k65m\" (UID: \"9bfc2b5a-908c-4a8a-bad8-477cf6394488\") " pod="kube-system/cilium-9k65m" Jan 17 12:18:30.870745 kubelet[2446]: I0117 12:18:30.870784 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bfc2b5a-908c-4a8a-bad8-477cf6394488-host-proc-sys-net\") pod \"cilium-9k65m\" (UID: \"9bfc2b5a-908c-4a8a-bad8-477cf6394488\") " pod="kube-system/cilium-9k65m" Jan 17 12:18:30.871145 kubelet[2446]: I0117 12:18:30.870814 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpsrd\" (UniqueName: \"kubernetes.io/projected/9bfc2b5a-908c-4a8a-bad8-477cf6394488-kube-api-access-xpsrd\") pod \"cilium-9k65m\" (UID: \"9bfc2b5a-908c-4a8a-bad8-477cf6394488\") " pod="kube-system/cilium-9k65m" Jan 17 12:18:30.871145 kubelet[2446]: I0117 12:18:30.870841 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9bfc2b5a-908c-4a8a-bad8-477cf6394488-cni-path\") pod \"cilium-9k65m\" (UID: \"9bfc2b5a-908c-4a8a-bad8-477cf6394488\") " pod="kube-system/cilium-9k65m" Jan 17 12:18:30.871145 kubelet[2446]: I0117 12:18:30.870897 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9bfc2b5a-908c-4a8a-bad8-477cf6394488-cilium-ipsec-secrets\") pod \"cilium-9k65m\" (UID: \"9bfc2b5a-908c-4a8a-bad8-477cf6394488\") " pod="kube-system/cilium-9k65m" Jan 17 12:18:30.871145 kubelet[2446]: I0117 12:18:30.870938 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bfc2b5a-908c-4a8a-bad8-477cf6394488-lib-modules\") pod \"cilium-9k65m\" (UID: \"9bfc2b5a-908c-4a8a-bad8-477cf6394488\") " pod="kube-system/cilium-9k65m" Jan 17 12:18:30.871145 kubelet[2446]: I0117 12:18:30.870968 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bfc2b5a-908c-4a8a-bad8-477cf6394488-hubble-tls\") pod \"cilium-9k65m\" (UID: \"9bfc2b5a-908c-4a8a-bad8-477cf6394488\") " pod="kube-system/cilium-9k65m" Jan 17 12:18:30.871145 kubelet[2446]: I0117 12:18:30.870994 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9bfc2b5a-908c-4a8a-bad8-477cf6394488-clustermesh-secrets\") pod \"cilium-9k65m\" (UID: \"9bfc2b5a-908c-4a8a-bad8-477cf6394488\") " pod="kube-system/cilium-9k65m" Jan 17 12:18:30.871429 kubelet[2446]: I0117 12:18:30.871022 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9bfc2b5a-908c-4a8a-bad8-477cf6394488-cilium-run\") pod 
\"cilium-9k65m\" (UID: \"9bfc2b5a-908c-4a8a-bad8-477cf6394488\") " pod="kube-system/cilium-9k65m" Jan 17 12:18:30.871429 kubelet[2446]: I0117 12:18:30.871053 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bfc2b5a-908c-4a8a-bad8-477cf6394488-cilium-cgroup\") pod \"cilium-9k65m\" (UID: \"9bfc2b5a-908c-4a8a-bad8-477cf6394488\") " pod="kube-system/cilium-9k65m" Jan 17 12:18:30.871429 kubelet[2446]: I0117 12:18:30.871083 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj8gj\" (UniqueName: \"kubernetes.io/projected/a14b8ff2-03ff-4f0f-adde-a77da33ac007-kube-api-access-qj8gj\") pod \"cilium-operator-5cc964979-ftktj\" (UID: \"a14b8ff2-03ff-4f0f-adde-a77da33ac007\") " pod="kube-system/cilium-operator-5cc964979-ftktj" Jan 17 12:18:30.871429 kubelet[2446]: I0117 12:18:30.871110 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bfc2b5a-908c-4a8a-bad8-477cf6394488-hostproc\") pod \"cilium-9k65m\" (UID: \"9bfc2b5a-908c-4a8a-bad8-477cf6394488\") " pod="kube-system/cilium-9k65m" Jan 17 12:18:30.871429 kubelet[2446]: I0117 12:18:30.871139 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bfc2b5a-908c-4a8a-bad8-477cf6394488-cilium-config-path\") pod \"cilium-9k65m\" (UID: \"9bfc2b5a-908c-4a8a-bad8-477cf6394488\") " pod="kube-system/cilium-9k65m" Jan 17 12:18:30.871593 kubelet[2446]: I0117 12:18:30.871181 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9bfc2b5a-908c-4a8a-bad8-477cf6394488-host-proc-sys-kernel\") pod \"cilium-9k65m\" (UID: \"9bfc2b5a-908c-4a8a-bad8-477cf6394488\") " 
pod="kube-system/cilium-9k65m" Jan 17 12:18:30.871593 kubelet[2446]: I0117 12:18:30.871213 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bfc2b5a-908c-4a8a-bad8-477cf6394488-bpf-maps\") pod \"cilium-9k65m\" (UID: \"9bfc2b5a-908c-4a8a-bad8-477cf6394488\") " pod="kube-system/cilium-9k65m" Jan 17 12:18:30.871593 kubelet[2446]: I0117 12:18:30.871250 2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a14b8ff2-03ff-4f0f-adde-a77da33ac007-cilium-config-path\") pod \"cilium-operator-5cc964979-ftktj\" (UID: \"a14b8ff2-03ff-4f0f-adde-a77da33ac007\") " pod="kube-system/cilium-operator-5cc964979-ftktj" Jan 17 12:18:30.877129 kubelet[2446]: E0117 12:18:30.877094 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:31.061124 containerd[1955]: time="2025-01-17T12:18:31.061002427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-ftktj,Uid:a14b8ff2-03ff-4f0f-adde-a77da33ac007,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:31.103890 containerd[1955]: time="2025-01-17T12:18:31.103516282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:31.103890 containerd[1955]: time="2025-01-17T12:18:31.103662656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:31.103890 containerd[1955]: time="2025-01-17T12:18:31.103684291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:31.103890 containerd[1955]: time="2025-01-17T12:18:31.103762155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:31.132376 systemd[1]: Started cri-containerd-268e2c4286e5810435903c00ac11523f37c25cfaaf6f9b7a90ab1c1817df1bbf.scope - libcontainer container 268e2c4286e5810435903c00ac11523f37c25cfaaf6f9b7a90ab1c1817df1bbf. Jan 17 12:18:31.186729 containerd[1955]: time="2025-01-17T12:18:31.186608700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-ftktj,Uid:a14b8ff2-03ff-4f0f-adde-a77da33ac007,Namespace:kube-system,Attempt:0,} returns sandbox id \"268e2c4286e5810435903c00ac11523f37c25cfaaf6f9b7a90ab1c1817df1bbf\"" Jan 17 12:18:31.189130 containerd[1955]: time="2025-01-17T12:18:31.188776190Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 12:18:31.877808 kubelet[2446]: E0117 12:18:31.877755 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:31.972394 kubelet[2446]: E0117 12:18:31.972295 2446 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 12:18:31.973940 kubelet[2446]: E0117 12:18:31.973906 2446 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jan 17 12:18:31.974238 kubelet[2446]: E0117 12:18:31.974218 2446 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bfc2b5a-908c-4a8a-bad8-477cf6394488-cilium-ipsec-secrets podName:9bfc2b5a-908c-4a8a-bad8-477cf6394488 nodeName:}" failed. 
No retries permitted until 2025-01-17 12:18:32.473981755 +0000 UTC m=+71.338348260 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/9bfc2b5a-908c-4a8a-bad8-477cf6394488-cilium-ipsec-secrets") pod "cilium-9k65m" (UID: "9bfc2b5a-908c-4a8a-bad8-477cf6394488") : failed to sync secret cache: timed out waiting for the condition Jan 17 12:18:31.979844 kubelet[2446]: E0117 12:18:31.979807 2446 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jan 17 12:18:31.979966 kubelet[2446]: E0117 12:18:31.979896 2446 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bfc2b5a-908c-4a8a-bad8-477cf6394488-clustermesh-secrets podName:9bfc2b5a-908c-4a8a-bad8-477cf6394488 nodeName:}" failed. No retries permitted until 2025-01-17 12:18:32.47987444 +0000 UTC m=+71.344240936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/9bfc2b5a-908c-4a8a-bad8-477cf6394488-clustermesh-secrets") pod "cilium-9k65m" (UID: "9bfc2b5a-908c-4a8a-bad8-477cf6394488") : failed to sync secret cache: timed out waiting for the condition Jan 17 12:18:31.982068 kubelet[2446]: E0117 12:18:31.982025 2446 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 17 12:18:31.982068 kubelet[2446]: E0117 12:18:31.982061 2446 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-9k65m: failed to sync secret cache: timed out waiting for the condition Jan 17 12:18:31.982244 kubelet[2446]: E0117 12:18:31.982140 2446 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9bfc2b5a-908c-4a8a-bad8-477cf6394488-hubble-tls podName:9bfc2b5a-908c-4a8a-bad8-477cf6394488 nodeName:}" failed. 
No retries permitted until 2025-01-17 12:18:32.482119242 +0000 UTC m=+71.346485749 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/9bfc2b5a-908c-4a8a-bad8-477cf6394488-hubble-tls") pod "cilium-9k65m" (UID: "9bfc2b5a-908c-4a8a-bad8-477cf6394488") : failed to sync secret cache: timed out waiting for the condition Jan 17 12:18:32.613920 containerd[1955]: time="2025-01-17T12:18:32.613802853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9k65m,Uid:9bfc2b5a-908c-4a8a-bad8-477cf6394488,Namespace:kube-system,Attempt:0,}" Jan 17 12:18:32.705331 containerd[1955]: time="2025-01-17T12:18:32.689980223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:18:32.705331 containerd[1955]: time="2025-01-17T12:18:32.690052628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:18:32.705331 containerd[1955]: time="2025-01-17T12:18:32.690076110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:32.705331 containerd[1955]: time="2025-01-17T12:18:32.690300264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:18:32.761731 systemd[1]: Started cri-containerd-6b68cf86d67030ddbb5eb73b54606103839e44c8447e2e10bb1fd80074d8d96a.scope - libcontainer container 6b68cf86d67030ddbb5eb73b54606103839e44c8447e2e10bb1fd80074d8d96a. 
Jan 17 12:18:32.800928 containerd[1955]: time="2025-01-17T12:18:32.800865803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9k65m,Uid:9bfc2b5a-908c-4a8a-bad8-477cf6394488,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b68cf86d67030ddbb5eb73b54606103839e44c8447e2e10bb1fd80074d8d96a\"" Jan 17 12:18:32.810987 containerd[1955]: time="2025-01-17T12:18:32.810870018Z" level=info msg="CreateContainer within sandbox \"6b68cf86d67030ddbb5eb73b54606103839e44c8447e2e10bb1fd80074d8d96a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:18:32.841469 containerd[1955]: time="2025-01-17T12:18:32.841423361Z" level=info msg="CreateContainer within sandbox \"6b68cf86d67030ddbb5eb73b54606103839e44c8447e2e10bb1fd80074d8d96a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3295a12758891c963761ca2a2782dda54752fc32d179fc2011ccd43f39030ace\"" Jan 17 12:18:32.842132 containerd[1955]: time="2025-01-17T12:18:32.842025555Z" level=info msg="StartContainer for \"3295a12758891c963761ca2a2782dda54752fc32d179fc2011ccd43f39030ace\"" Jan 17 12:18:32.878639 kubelet[2446]: E0117 12:18:32.877960 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:32.894946 systemd[1]: Started cri-containerd-3295a12758891c963761ca2a2782dda54752fc32d179fc2011ccd43f39030ace.scope - libcontainer container 3295a12758891c963761ca2a2782dda54752fc32d179fc2011ccd43f39030ace. Jan 17 12:18:32.944744 containerd[1955]: time="2025-01-17T12:18:32.944621929Z" level=info msg="StartContainer for \"3295a12758891c963761ca2a2782dda54752fc32d179fc2011ccd43f39030ace\" returns successfully" Jan 17 12:18:33.175332 systemd[1]: cri-containerd-3295a12758891c963761ca2a2782dda54752fc32d179fc2011ccd43f39030ace.scope: Deactivated successfully. 
Jan 17 12:18:33.333377 containerd[1955]: time="2025-01-17T12:18:33.333313034Z" level=info msg="shim disconnected" id=3295a12758891c963761ca2a2782dda54752fc32d179fc2011ccd43f39030ace namespace=k8s.io Jan 17 12:18:33.334327 containerd[1955]: time="2025-01-17T12:18:33.333616380Z" level=warning msg="cleaning up after shim disconnected" id=3295a12758891c963761ca2a2782dda54752fc32d179fc2011ccd43f39030ace namespace=k8s.io Jan 17 12:18:33.334327 containerd[1955]: time="2025-01-17T12:18:33.333636028Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:18:33.377381 containerd[1955]: time="2025-01-17T12:18:33.377326763Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:18:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:18:33.439878 containerd[1955]: time="2025-01-17T12:18:33.438899275Z" level=info msg="CreateContainer within sandbox \"6b68cf86d67030ddbb5eb73b54606103839e44c8447e2e10bb1fd80074d8d96a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:18:33.472241 containerd[1955]: time="2025-01-17T12:18:33.472191320Z" level=info msg="CreateContainer within sandbox \"6b68cf86d67030ddbb5eb73b54606103839e44c8447e2e10bb1fd80074d8d96a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6d813fd9674687b84d3de495e90c47c183497be28985d7c7cbafbdb330e68a12\"" Jan 17 12:18:33.474216 containerd[1955]: time="2025-01-17T12:18:33.474079803Z" level=info msg="StartContainer for \"6d813fd9674687b84d3de495e90c47c183497be28985d7c7cbafbdb330e68a12\"" Jan 17 12:18:33.555457 systemd[1]: run-containerd-runc-k8s.io-6d813fd9674687b84d3de495e90c47c183497be28985d7c7cbafbdb330e68a12-runc.8hQeOb.mount: Deactivated successfully. 
Jan 17 12:18:33.568651 systemd[1]: Started cri-containerd-6d813fd9674687b84d3de495e90c47c183497be28985d7c7cbafbdb330e68a12.scope - libcontainer container 6d813fd9674687b84d3de495e90c47c183497be28985d7c7cbafbdb330e68a12. Jan 17 12:18:33.661431 containerd[1955]: time="2025-01-17T12:18:33.661386576Z" level=info msg="StartContainer for \"6d813fd9674687b84d3de495e90c47c183497be28985d7c7cbafbdb330e68a12\" returns successfully" Jan 17 12:18:33.763477 kubelet[2446]: I0117 12:18:33.762555 2446 setters.go:568] "Node became not ready" node="172.31.21.110" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-17T12:18:33Z","lastTransitionTime":"2025-01-17T12:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 17 12:18:33.783397 systemd[1]: cri-containerd-6d813fd9674687b84d3de495e90c47c183497be28985d7c7cbafbdb330e68a12.scope: Deactivated successfully. 
Jan 17 12:18:33.789274 containerd[1955]: time="2025-01-17T12:18:33.788939861Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:33.791626 containerd[1955]: time="2025-01-17T12:18:33.791457555Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907237" Jan 17 12:18:33.795336 containerd[1955]: time="2025-01-17T12:18:33.793162515Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:33.795793 containerd[1955]: time="2025-01-17T12:18:33.795713042Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.606801266s" Jan 17 12:18:33.795944 containerd[1955]: time="2025-01-17T12:18:33.795920925Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 17 12:18:33.798610 containerd[1955]: time="2025-01-17T12:18:33.798576222Z" level=info msg="CreateContainer within sandbox \"268e2c4286e5810435903c00ac11523f37c25cfaaf6f9b7a90ab1c1817df1bbf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 12:18:33.819753 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-6d813fd9674687b84d3de495e90c47c183497be28985d7c7cbafbdb330e68a12-rootfs.mount: Deactivated successfully. Jan 17 12:18:33.833026 containerd[1955]: time="2025-01-17T12:18:33.832912357Z" level=info msg="shim disconnected" id=6d813fd9674687b84d3de495e90c47c183497be28985d7c7cbafbdb330e68a12 namespace=k8s.io Jan 17 12:18:33.833026 containerd[1955]: time="2025-01-17T12:18:33.833020694Z" level=warning msg="cleaning up after shim disconnected" id=6d813fd9674687b84d3de495e90c47c183497be28985d7c7cbafbdb330e68a12 namespace=k8s.io Jan 17 12:18:33.833026 containerd[1955]: time="2025-01-17T12:18:33.833033280Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:18:33.834238 containerd[1955]: time="2025-01-17T12:18:33.833916614Z" level=info msg="CreateContainer within sandbox \"268e2c4286e5810435903c00ac11523f37c25cfaaf6f9b7a90ab1c1817df1bbf\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3f5e752858c2d4a70af9b0e518d25250af4ac8760b6b9f37dd72941dd7c0bece\"" Jan 17 12:18:33.835424 containerd[1955]: time="2025-01-17T12:18:33.835136284Z" level=info msg="StartContainer for \"3f5e752858c2d4a70af9b0e518d25250af4ac8760b6b9f37dd72941dd7c0bece\"" Jan 17 12:18:33.878629 kubelet[2446]: E0117 12:18:33.878595 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:33.883368 systemd[1]: Started cri-containerd-3f5e752858c2d4a70af9b0e518d25250af4ac8760b6b9f37dd72941dd7c0bece.scope - libcontainer container 3f5e752858c2d4a70af9b0e518d25250af4ac8760b6b9f37dd72941dd7c0bece. 
Jan 17 12:18:33.918651 containerd[1955]: time="2025-01-17T12:18:33.918372814Z" level=info msg="StartContainer for \"3f5e752858c2d4a70af9b0e518d25250af4ac8760b6b9f37dd72941dd7c0bece\" returns successfully" Jan 17 12:18:34.445301 containerd[1955]: time="2025-01-17T12:18:34.445262240Z" level=info msg="CreateContainer within sandbox \"6b68cf86d67030ddbb5eb73b54606103839e44c8447e2e10bb1fd80074d8d96a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 12:18:34.461877 containerd[1955]: time="2025-01-17T12:18:34.461827738Z" level=info msg="CreateContainer within sandbox \"6b68cf86d67030ddbb5eb73b54606103839e44c8447e2e10bb1fd80074d8d96a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"802317e0e9d6e038a754ab2001b39f124b9a1605fbd5ebd00070e78df67c5894\"" Jan 17 12:18:34.462514 containerd[1955]: time="2025-01-17T12:18:34.462478042Z" level=info msg="StartContainer for \"802317e0e9d6e038a754ab2001b39f124b9a1605fbd5ebd00070e78df67c5894\"" Jan 17 12:18:34.480202 kubelet[2446]: I0117 12:18:34.479902 2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-ftktj" podStartSLOduration=1.871600474 podStartE2EDuration="4.479761198s" podCreationTimestamp="2025-01-17 12:18:30 +0000 UTC" firstStartedPulling="2025-01-17 12:18:31.188220245 +0000 UTC m=+70.052586751" lastFinishedPulling="2025-01-17 12:18:33.796380968 +0000 UTC m=+72.660747475" observedRunningTime="2025-01-17 12:18:34.454240781 +0000 UTC m=+73.318607291" watchObservedRunningTime="2025-01-17 12:18:34.479761198 +0000 UTC m=+73.344127710" Jan 17 12:18:34.511472 systemd[1]: Started cri-containerd-802317e0e9d6e038a754ab2001b39f124b9a1605fbd5ebd00070e78df67c5894.scope - libcontainer container 802317e0e9d6e038a754ab2001b39f124b9a1605fbd5ebd00070e78df67c5894. 
Jan 17 12:18:34.622712 containerd[1955]: time="2025-01-17T12:18:34.622661862Z" level=info msg="StartContainer for \"802317e0e9d6e038a754ab2001b39f124b9a1605fbd5ebd00070e78df67c5894\" returns successfully" Jan 17 12:18:34.822767 systemd[1]: cri-containerd-802317e0e9d6e038a754ab2001b39f124b9a1605fbd5ebd00070e78df67c5894.scope: Deactivated successfully. Jan 17 12:18:34.873772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-802317e0e9d6e038a754ab2001b39f124b9a1605fbd5ebd00070e78df67c5894-rootfs.mount: Deactivated successfully. Jan 17 12:18:34.885431 kubelet[2446]: E0117 12:18:34.882637 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:34.895318 containerd[1955]: time="2025-01-17T12:18:34.895225448Z" level=info msg="shim disconnected" id=802317e0e9d6e038a754ab2001b39f124b9a1605fbd5ebd00070e78df67c5894 namespace=k8s.io Jan 17 12:18:34.895318 containerd[1955]: time="2025-01-17T12:18:34.895313167Z" level=warning msg="cleaning up after shim disconnected" id=802317e0e9d6e038a754ab2001b39f124b9a1605fbd5ebd00070e78df67c5894 namespace=k8s.io Jan 17 12:18:34.895318 containerd[1955]: time="2025-01-17T12:18:34.895326003Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:18:35.449847 containerd[1955]: time="2025-01-17T12:18:35.449716225Z" level=info msg="CreateContainer within sandbox \"6b68cf86d67030ddbb5eb73b54606103839e44c8447e2e10bb1fd80074d8d96a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 12:18:35.466612 containerd[1955]: time="2025-01-17T12:18:35.466561851Z" level=info msg="CreateContainer within sandbox \"6b68cf86d67030ddbb5eb73b54606103839e44c8447e2e10bb1fd80074d8d96a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1988ad3b39e543f20113a1cea21c307b4a2662a8fc944f4537e7b6f2204e3cb5\"" Jan 17 12:18:35.468295 containerd[1955]: time="2025-01-17T12:18:35.467094699Z" level=info msg="StartContainer 
for \"1988ad3b39e543f20113a1cea21c307b4a2662a8fc944f4537e7b6f2204e3cb5\"" Jan 17 12:18:35.550322 systemd[1]: run-containerd-runc-k8s.io-1988ad3b39e543f20113a1cea21c307b4a2662a8fc944f4537e7b6f2204e3cb5-runc.nBr1s5.mount: Deactivated successfully. Jan 17 12:18:35.560441 systemd[1]: Started cri-containerd-1988ad3b39e543f20113a1cea21c307b4a2662a8fc944f4537e7b6f2204e3cb5.scope - libcontainer container 1988ad3b39e543f20113a1cea21c307b4a2662a8fc944f4537e7b6f2204e3cb5. Jan 17 12:18:35.591764 containerd[1955]: time="2025-01-17T12:18:35.591588311Z" level=info msg="StartContainer for \"1988ad3b39e543f20113a1cea21c307b4a2662a8fc944f4537e7b6f2204e3cb5\" returns successfully" Jan 17 12:18:35.600750 systemd[1]: cri-containerd-1988ad3b39e543f20113a1cea21c307b4a2662a8fc944f4537e7b6f2204e3cb5.scope: Deactivated successfully. Jan 17 12:18:35.630929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1988ad3b39e543f20113a1cea21c307b4a2662a8fc944f4537e7b6f2204e3cb5-rootfs.mount: Deactivated successfully. 
Jan 17 12:18:35.640009 containerd[1955]: time="2025-01-17T12:18:35.639938333Z" level=info msg="shim disconnected" id=1988ad3b39e543f20113a1cea21c307b4a2662a8fc944f4537e7b6f2204e3cb5 namespace=k8s.io Jan 17 12:18:35.640009 containerd[1955]: time="2025-01-17T12:18:35.640006504Z" level=warning msg="cleaning up after shim disconnected" id=1988ad3b39e543f20113a1cea21c307b4a2662a8fc944f4537e7b6f2204e3cb5 namespace=k8s.io Jan 17 12:18:35.640530 containerd[1955]: time="2025-01-17T12:18:35.640019728Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:18:35.884507 kubelet[2446]: E0117 12:18:35.884459 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:36.454249 containerd[1955]: time="2025-01-17T12:18:36.454209385Z" level=info msg="CreateContainer within sandbox \"6b68cf86d67030ddbb5eb73b54606103839e44c8447e2e10bb1fd80074d8d96a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 12:18:36.471610 containerd[1955]: time="2025-01-17T12:18:36.471566701Z" level=info msg="CreateContainer within sandbox \"6b68cf86d67030ddbb5eb73b54606103839e44c8447e2e10bb1fd80074d8d96a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0d86defd22fbcb8ff1e7236f02723569ff30d429f95859b981c0bbaac0a8c620\"" Jan 17 12:18:36.472440 containerd[1955]: time="2025-01-17T12:18:36.472405409Z" level=info msg="StartContainer for \"0d86defd22fbcb8ff1e7236f02723569ff30d429f95859b981c0bbaac0a8c620\"" Jan 17 12:18:36.511617 systemd[1]: Started cri-containerd-0d86defd22fbcb8ff1e7236f02723569ff30d429f95859b981c0bbaac0a8c620.scope - libcontainer container 0d86defd22fbcb8ff1e7236f02723569ff30d429f95859b981c0bbaac0a8c620. 
Jan 17 12:18:36.566601 containerd[1955]: time="2025-01-17T12:18:36.566552963Z" level=info msg="StartContainer for \"0d86defd22fbcb8ff1e7236f02723569ff30d429f95859b981c0bbaac0a8c620\" returns successfully" Jan 17 12:18:36.885637 kubelet[2446]: E0117 12:18:36.885582 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:36.976130 kubelet[2446]: E0117 12:18:36.976086 2446 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 12:18:37.886700 kubelet[2446]: E0117 12:18:37.886643 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:38.485526 kubelet[2446]: I0117 12:18:38.485484 2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9k65m" podStartSLOduration=8.485445366 podStartE2EDuration="8.485445366s" podCreationTimestamp="2025-01-17 12:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:18:38.484633619 +0000 UTC m=+77.349000134" watchObservedRunningTime="2025-01-17 12:18:38.485445366 +0000 UTC m=+77.349811909" Jan 17 12:18:38.887592 kubelet[2446]: E0117 12:18:38.887540 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:39.514213 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 17 12:18:39.888538 kubelet[2446]: E0117 12:18:39.888448 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:40.889446 kubelet[2446]: E0117 12:18:40.889410 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 17 12:18:41.814550 kubelet[2446]: E0117 12:18:41.813164 2446 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:41.889599 kubelet[2446]: E0117 12:18:41.889561 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:42.890840 kubelet[2446]: E0117 12:18:42.890561 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:43.327400 systemd[1]: run-containerd-runc-k8s.io-0d86defd22fbcb8ff1e7236f02723569ff30d429f95859b981c0bbaac0a8c620-runc.duqZr6.mount: Deactivated successfully. Jan 17 12:18:43.665904 (udev-worker)[5251]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:18:43.677890 systemd-networkd[1808]: lxc_health: Link UP Jan 17 12:18:43.679769 systemd-networkd[1808]: lxc_health: Gained carrier Jan 17 12:18:43.891392 kubelet[2446]: E0117 12:18:43.891347 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:44.893192 kubelet[2446]: E0117 12:18:44.892422 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:45.394334 systemd-networkd[1808]: lxc_health: Gained IPv6LL Jan 17 12:18:45.614815 systemd[1]: run-containerd-runc-k8s.io-0d86defd22fbcb8ff1e7236f02723569ff30d429f95859b981c0bbaac0a8c620-runc.1NAWeZ.mount: Deactivated successfully. 
Jan 17 12:18:45.895533 kubelet[2446]: E0117 12:18:45.895458 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:46.895756 kubelet[2446]: E0117 12:18:46.895680 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:47.886783 systemd[1]: run-containerd-runc-k8s.io-0d86defd22fbcb8ff1e7236f02723569ff30d429f95859b981c0bbaac0a8c620-runc.HPON2q.mount: Deactivated successfully. Jan 17 12:18:47.896307 kubelet[2446]: E0117 12:18:47.896250 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:48.206202 ntpd[1936]: Listen normally on 14 lxc_health [fe80::84a2:c6ff:fe5f:a1f%15]:123 Jan 17 12:18:48.207046 ntpd[1936]: 17 Jan 12:18:48 ntpd[1936]: Listen normally on 14 lxc_health [fe80::84a2:c6ff:fe5f:a1f%15]:123 Jan 17 12:18:48.896655 kubelet[2446]: E0117 12:18:48.896563 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:49.898194 kubelet[2446]: E0117 12:18:49.897702 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:50.898247 kubelet[2446]: E0117 12:18:50.898190 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:51.898796 kubelet[2446]: E0117 12:18:51.898745 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:52.899084 kubelet[2446]: E0117 12:18:52.899028 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:53.900101 kubelet[2446]: E0117 12:18:53.900047 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 17 12:18:54.910961 kubelet[2446]: E0117 12:18:54.900762 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:55.911777 kubelet[2446]: E0117 12:18:55.911720 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:56.912907 kubelet[2446]: E0117 12:18:56.912854 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:57.914034 kubelet[2446]: E0117 12:18:57.913990 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:58.914212 kubelet[2446]: E0117 12:18:58.914143 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:18:59.914691 kubelet[2446]: E0117 12:18:59.914635 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:00.915086 kubelet[2446]: E0117 12:19:00.915027 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:01.814254 kubelet[2446]: E0117 12:19:01.814200 2446 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:01.915906 kubelet[2446]: E0117 12:19:01.915850 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:02.916600 kubelet[2446]: E0117 12:19:02.916544 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:03.917342 kubelet[2446]: E0117 12:19:03.917289 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 17 12:19:04.028227 kubelet[2446]: E0117 12:19:04.028181 2446 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.110?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 17 12:19:04.118345 kubelet[2446]: E0117 12:19:04.118300 2446 kubelet_node_status.go:544] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-01-17T12:18:54Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-17T12:18:54Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-17T12:18:54Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-17T12:18:54Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":71035896},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\\\",\\\"registry.k8s.io/kube-proxy:v1.29.13\\\"],\\\"sizeBytes\\\":28619960},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278
b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286}]}}\" for node \"172.31.21.110\": Patch \"https://172.31.24.246:6443/api/v1/nodes/172.31.21.110/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 17 12:19:04.151887 systemd[1]: cri-containerd-3f5e752858c2d4a70af9b0e518d25250af4ac8760b6b9f37dd72941dd7c0bece.scope: Deactivated successfully. Jan 17 12:19:04.176775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f5e752858c2d4a70af9b0e518d25250af4ac8760b6b9f37dd72941dd7c0bece-rootfs.mount: Deactivated successfully. Jan 17 12:19:04.198519 containerd[1955]: time="2025-01-17T12:19:04.198153074Z" level=info msg="shim disconnected" id=3f5e752858c2d4a70af9b0e518d25250af4ac8760b6b9f37dd72941dd7c0bece namespace=k8s.io Jan 17 12:19:04.198519 containerd[1955]: time="2025-01-17T12:19:04.198231512Z" level=warning msg="cleaning up after shim disconnected" id=3f5e752858c2d4a70af9b0e518d25250af4ac8760b6b9f37dd72941dd7c0bece namespace=k8s.io Jan 17 12:19:04.198519 containerd[1955]: time="2025-01-17T12:19:04.198243811Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:19:04.559401 kubelet[2446]: I0117 12:19:04.553750 2446 scope.go:117] "RemoveContainer" containerID="3f5e752858c2d4a70af9b0e518d25250af4ac8760b6b9f37dd72941dd7c0bece" Jan 17 12:19:04.567728 containerd[1955]: time="2025-01-17T12:19:04.567680876Z" level=info msg="CreateContainer within sandbox \"268e2c4286e5810435903c00ac11523f37c25cfaaf6f9b7a90ab1c1817df1bbf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Jan 17 12:19:04.609007 containerd[1955]: time="2025-01-17T12:19:04.608949649Z" level=info msg="CreateContainer within sandbox \"268e2c4286e5810435903c00ac11523f37c25cfaaf6f9b7a90ab1c1817df1bbf\" for 
&ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"036781bc38861f255b29ef96bc48098860e2e4b40d911f13c5b15d14c37fcb5d\"" Jan 17 12:19:04.618003 containerd[1955]: time="2025-01-17T12:19:04.610788664Z" level=info msg="StartContainer for \"036781bc38861f255b29ef96bc48098860e2e4b40d911f13c5b15d14c37fcb5d\"" Jan 17 12:19:04.708576 systemd[1]: Started cri-containerd-036781bc38861f255b29ef96bc48098860e2e4b40d911f13c5b15d14c37fcb5d.scope - libcontainer container 036781bc38861f255b29ef96bc48098860e2e4b40d911f13c5b15d14c37fcb5d. Jan 17 12:19:04.759902 containerd[1955]: time="2025-01-17T12:19:04.759841402Z" level=info msg="StartContainer for \"036781bc38861f255b29ef96bc48098860e2e4b40d911f13c5b15d14c37fcb5d\" returns successfully" Jan 17 12:19:04.917729 kubelet[2446]: E0117 12:19:04.917687 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:05.917957 kubelet[2446]: E0117 12:19:05.917904 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:06.919066 kubelet[2446]: E0117 12:19:06.919014 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:07.919459 kubelet[2446]: E0117 12:19:07.919405 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:08.920185 kubelet[2446]: E0117 12:19:08.920127 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:09.920849 kubelet[2446]: E0117 12:19:09.920797 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:10.921300 kubelet[2446]: E0117 12:19:10.921239 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 17 12:19:11.921613 kubelet[2446]: E0117 12:19:11.921562 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:12.922756 kubelet[2446]: E0117 12:19:12.922704 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:13.922917 kubelet[2446]: E0117 12:19:13.922861 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:14.029460 kubelet[2446]: E0117 12:19:14.029338 2446 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.110?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 17 12:19:14.118814 kubelet[2446]: E0117 12:19:14.118771 2446 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.21.110\": Get \"https://172.31.24.246:6443/api/v1/nodes/172.31.21.110?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 17 12:19:14.923468 kubelet[2446]: E0117 12:19:14.923413 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:15.924134 kubelet[2446]: E0117 12:19:15.924078 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:16.924388 kubelet[2446]: E0117 12:19:16.924332 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:17.924751 kubelet[2446]: E0117 12:19:17.924695 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 12:19:18.925645 kubelet[2446]: E0117 
12:19:18.925591 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:19.925899 kubelet[2446]: E0117 12:19:19.925856 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:20.926506 kubelet[2446]: E0117 12:19:20.926463 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:21.814059 kubelet[2446]: E0117 12:19:21.814010 2446 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:21.856826 containerd[1955]: time="2025-01-17T12:19:21.856786324Z" level=info msg="StopPodSandbox for \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\""
Jan 17 12:19:21.857257 containerd[1955]: time="2025-01-17T12:19:21.856885397Z" level=info msg="TearDown network for sandbox \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\" successfully"
Jan 17 12:19:21.857257 containerd[1955]: time="2025-01-17T12:19:21.856900718Z" level=info msg="StopPodSandbox for \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\" returns successfully"
Jan 17 12:19:21.857859 containerd[1955]: time="2025-01-17T12:19:21.857822271Z" level=info msg="RemovePodSandbox for \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\""
Jan 17 12:19:21.857859 containerd[1955]: time="2025-01-17T12:19:21.857853361Z" level=info msg="Forcibly stopping sandbox \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\""
Jan 17 12:19:21.857992 containerd[1955]: time="2025-01-17T12:19:21.857924976Z" level=info msg="TearDown network for sandbox \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\" successfully"
Jan 17 12:19:21.867428 containerd[1955]: time="2025-01-17T12:19:21.867371783Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 12:19:21.867585 containerd[1955]: time="2025-01-17T12:19:21.867446866Z" level=info msg="RemovePodSandbox \"98b9f5606a35d7727c15eedf56baa2670932ec598bdd2d1b405f4fd89aff9352\" returns successfully"
Jan 17 12:19:21.927567 kubelet[2446]: E0117 12:19:21.927515 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:22.928131 kubelet[2446]: E0117 12:19:22.928058 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:23.928464 kubelet[2446]: E0117 12:19:23.928415 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:24.030368 kubelet[2446]: E0117 12:19:24.030324 2446 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.110?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 17 12:19:24.119486 kubelet[2446]: E0117 12:19:24.119436 2446 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.21.110\": Get \"https://172.31.24.246:6443/api/v1/nodes/172.31.21.110?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 17 12:19:24.928718 kubelet[2446]: E0117 12:19:24.928660 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:25.929475 kubelet[2446]: E0117 12:19:25.929425 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:26.929647 kubelet[2446]: E0117 12:19:26.929576 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:27.930326 kubelet[2446]: E0117 12:19:27.930271 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:28.931338 kubelet[2446]: E0117 12:19:28.931288 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:29.932665 kubelet[2446]: E0117 12:19:29.932525 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:30.933709 kubelet[2446]: E0117 12:19:30.933661 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:31.934644 kubelet[2446]: E0117 12:19:31.934581 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:32.935014 kubelet[2446]: E0117 12:19:32.934958 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:33.935697 kubelet[2446]: E0117 12:19:33.935642 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:34.030666 kubelet[2446]: E0117 12:19:34.030627 2446 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.110?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 17 12:19:34.119765 kubelet[2446]: E0117 12:19:34.119715 2446 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.21.110\": Get \"https://172.31.24.246:6443/api/v1/nodes/172.31.21.110?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 17 12:19:34.186461 kubelet[2446]: E0117 12:19:34.183847 2446 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.246:6443/api/v1/namespaces/kube-system/events\": unexpected EOF" event="&Event{ObjectMeta:{cilium-operator-5cc964979-ftktj.181b7a1c9b425172 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:cilium-operator-5cc964979-ftktj,UID:a14b8ff2-03ff-4f0f-adde-a77da33ac007,APIVersion:v1,ResourceVersion:912,FieldPath:spec.containers{cilium-operator},},Reason:Pulled,Message:Container image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" already present on machine,Source:EventSource{Component:kubelet,Host:172.31.21.110,},FirstTimestamp:2025-01-17 12:19:04.563945842 +0000 UTC m=+103.428312354,LastTimestamp:2025-01-17 12:19:04.563945842 +0000 UTC m=+103.428312354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.21.110,}"
Jan 17 12:19:34.186461 kubelet[2446]: E0117 12:19:34.186093 2446 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.110?timeout=10s\": unexpected EOF"
Jan 17 12:19:34.186461 kubelet[2446]: I0117 12:19:34.186124 2446 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 17 12:19:34.936884 kubelet[2446]: E0117 12:19:34.936834 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:35.185840 kubelet[2446]: E0117 12:19:35.185790 2446 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.21.110\": Get \"https://172.31.24.246:6443/api/v1/nodes/172.31.21.110?timeout=10s\": dial tcp 172.31.24.246:6443: connect: connection refused - error from a previous attempt: unexpected EOF"
Jan 17 12:19:35.185840 kubelet[2446]: E0117 12:19:35.185826 2446 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
Jan 17 12:19:35.186054 kubelet[2446]: I0117 12:19:35.185890 2446 status_manager.go:853] "Failed to get status for pod" podUID="a14b8ff2-03ff-4f0f-adde-a77da33ac007" pod="kube-system/cilium-operator-5cc964979-ftktj" err="Get \"https://172.31.24.246:6443/api/v1/namespaces/kube-system/pods/cilium-operator-5cc964979-ftktj\": dial tcp 172.31.24.246:6443: connect: connection refused - error from a previous attempt: unexpected EOF"
Jan 17 12:19:35.186563 kubelet[2446]: I0117 12:19:35.186536 2446 status_manager.go:853] "Failed to get status for pod" podUID="a14b8ff2-03ff-4f0f-adde-a77da33ac007" pod="kube-system/cilium-operator-5cc964979-ftktj" err="Get \"https://172.31.24.246:6443/api/v1/namespaces/kube-system/pods/cilium-operator-5cc964979-ftktj\": dial tcp 172.31.24.246:6443: connect: connection refused"
Jan 17 12:19:35.187591 kubelet[2446]: I0117 12:19:35.187418 2446 status_manager.go:853] "Failed to get status for pod" podUID="a14b8ff2-03ff-4f0f-adde-a77da33ac007" pod="kube-system/cilium-operator-5cc964979-ftktj" err="Get \"https://172.31.24.246:6443/api/v1/namespaces/kube-system/pods/cilium-operator-5cc964979-ftktj\": dial tcp 172.31.24.246:6443: connect: connection refused"
Jan 17 12:19:35.206779 kubelet[2446]: E0117 12:19:35.206735 2446 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.110?timeout=10s\": dial tcp 172.31.24.246:6443: connect: connection refused - error from a previous attempt: read tcp 172.31.21.110:37898->172.31.24.246:6443: read: connection reset by peer" interval="200ms"
Jan 17 12:19:35.937419 kubelet[2446]: E0117 12:19:35.937362 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:36.937616 kubelet[2446]: E0117 12:19:36.937557 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:37.938417 kubelet[2446]: E0117 12:19:37.938363 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:38.939580 kubelet[2446]: E0117 12:19:38.939525 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:39.940317 kubelet[2446]: E0117 12:19:39.940263 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:40.941377 kubelet[2446]: E0117 12:19:40.941338 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:41.813973 kubelet[2446]: E0117 12:19:41.813920 2446 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:41.942644 kubelet[2446]: E0117 12:19:41.942591 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:42.943637 kubelet[2446]: E0117 12:19:42.943577 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:43.944490 kubelet[2446]: E0117 12:19:43.944437 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:44.944912 kubelet[2446]: E0117 12:19:44.944851 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:45.407679 kubelet[2446]: E0117 12:19:45.407636 2446 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.110?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Jan 17 12:19:45.945808 kubelet[2446]: E0117 12:19:45.945753 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:46.946370 kubelet[2446]: E0117 12:19:46.946314 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:47.947331 kubelet[2446]: E0117 12:19:47.947280 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:48.947727 kubelet[2446]: E0117 12:19:48.947672 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:49.948903 kubelet[2446]: E0117 12:19:49.948847 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:50.949082 kubelet[2446]: E0117 12:19:50.949025 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:51.949963 kubelet[2446]: E0117 12:19:51.949912 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:52.950782 kubelet[2446]: E0117 12:19:52.950671 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:53.951495 kubelet[2446]: E0117 12:19:53.951441 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:54.952032 kubelet[2446]: E0117 12:19:54.951978 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 12:19:55.437199 kubelet[2446]: E0117 12:19:55.437122 2446 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.21.110\": Get \"https://172.31.24.246:6443/api/v1/nodes/172.31.21.110?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 17 12:19:55.808558 kubelet[2446]: E0117 12:19:55.808440 2446 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.110?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="800ms"
Jan 17 12:19:55.952334 kubelet[2446]: E0117 12:19:55.952284 2446 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"