Jan 29 16:25:06.994560 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025 Jan 29 16:25:06.994834 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:25:06.994851 kernel: BIOS-provided physical RAM map: Jan 29 16:25:06.994863 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 16:25:06.994874 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 16:25:06.994886 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 16:25:06.994903 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Jan 29 16:25:06.994915 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Jan 29 16:25:06.994927 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Jan 29 16:25:06.994939 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 16:25:06.994952 kernel: NX (Execute Disable) protection: active Jan 29 16:25:06.994964 kernel: APIC: Static calls initialized Jan 29 16:25:06.994975 kernel: SMBIOS 2.7 present. Jan 29 16:25:06.994988 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 29 16:25:06.995006 kernel: Hypervisor detected: KVM Jan 29 16:25:06.995019 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 16:25:06.995032 kernel: kvm-clock: using sched offset of 7824390131 cycles Jan 29 16:25:06.995047 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 16:25:06.995060 kernel: tsc: Detected 2499.996 MHz processor Jan 29 16:25:06.995074 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 16:25:06.995088 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 16:25:06.995105 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Jan 29 16:25:06.995118 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 16:25:06.995132 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 16:25:06.995211 kernel: Using GB pages for direct mapping Jan 29 16:25:06.995402 kernel: ACPI: Early table checksum verification disabled Jan 29 16:25:06.995417 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Jan 29 16:25:06.995431 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Jan 29 16:25:06.995445 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 29 16:25:06.995458 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 29 16:25:06.995485 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Jan 29 16:25:06.995499 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 29 16:25:06.995513 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 29 16:25:06.995526 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 29 16:25:06.995541 
kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 29 16:25:06.995554 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 29 16:25:06.995568 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 29 16:25:06.995581 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 29 16:25:06.995595 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Jan 29 16:25:06.995628 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Jan 29 16:25:06.995649 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Jan 29 16:25:06.995664 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Jan 29 16:25:06.995678 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Jan 29 16:25:06.995693 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Jan 29 16:25:06.995710 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Jan 29 16:25:06.995725 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Jan 29 16:25:06.995740 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Jan 29 16:25:06.995753 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Jan 29 16:25:06.995765 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 16:25:06.995777 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 29 16:25:06.995790 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 29 16:25:06.995804 kernel: NUMA: Initialized distance table, cnt=1 Jan 29 16:25:07.000668 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Jan 29 16:25:07.000703 kernel: Zone ranges: Jan 29 16:25:07.000719 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 16:25:07.000737 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Jan 29 16:25:07.000752 kernel: Normal empty Jan 29 16:25:07.000766 kernel: Movable zone start for each node Jan 29 16:25:07.000781 kernel: Early memory node ranges Jan 29 16:25:07.000797 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 16:25:07.000811 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Jan 29 16:25:07.000827 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Jan 29 16:25:07.000842 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 16:25:07.000861 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 16:25:07.000876 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Jan 29 16:25:07.000891 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 29 16:25:07.000905 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 16:25:07.000920 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 29 16:25:07.000935 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 16:25:07.000949 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 16:25:07.000962 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 16:25:07.000975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 16:25:07.000993 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 16:25:07.001008 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 16:25:07.001021 kernel: TSC deadline timer available Jan 29 16:25:07.001035 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 16:25:07.001050 
kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 16:25:07.001065 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jan 29 16:25:07.001079 kernel: Booting paravirtualized kernel on KVM Jan 29 16:25:07.001093 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 16:25:07.001107 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 16:25:07.001125 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 29 16:25:07.001139 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 16:25:07.001153 kernel: pcpu-alloc: [0] 0 1 Jan 29 16:25:07.001167 kernel: kvm-guest: PV spinlocks enabled Jan 29 16:25:07.001182 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 16:25:07.001250 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:25:07.001397 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 16:25:07.001415 kernel: random: crng init done Jan 29 16:25:07.001435 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 16:25:07.001451 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 16:25:07.001466 kernel: Fallback order for Node 0: 0 Jan 29 16:25:07.001482 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Jan 29 16:25:07.001497 kernel: Policy zone: DMA32 Jan 29 16:25:07.001513 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 16:25:07.001530 kernel: Memory: 1930300K/2057760K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 127200K reserved, 0K cma-reserved) Jan 29 16:25:07.001547 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 16:25:07.001562 kernel: Kernel/User page tables isolation: enabled Jan 29 16:25:07.001582 kernel: ftrace: allocating 37893 entries in 149 pages Jan 29 16:25:07.001598 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 16:25:07.001716 kernel: Dynamic Preempt: voluntary Jan 29 16:25:07.001734 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 16:25:07.001750 kernel: rcu: RCU event tracing is enabled. Jan 29 16:25:07.001767 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 16:25:07.001785 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 16:25:07.001802 kernel: Rude variant of Tasks RCU enabled. Jan 29 16:25:07.001817 kernel: Tracing variant of Tasks RCU enabled. Jan 29 16:25:07.001841 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 16:25:07.001858 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 16:25:07.001874 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 16:25:07.001892 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 29 16:25:07.001909 kernel: Console: colour VGA+ 80x25 Jan 29 16:25:07.001925 kernel: printk: console [ttyS0] enabled Jan 29 16:25:07.001943 kernel: ACPI: Core revision 20230628 Jan 29 16:25:07.001960 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 29 16:25:07.001977 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 16:25:07.001999 kernel: x2apic enabled Jan 29 16:25:07.002015 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 16:25:07.002046 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jan 29 16:25:07.002066 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Jan 29 16:25:07.002080 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 29 16:25:07.002095 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 29 16:25:07.002109 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 16:25:07.002124 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 16:25:07.002139 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 16:25:07.002158 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 16:25:07.002177 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jan 29 16:25:07.002195 kernel: RETBleed: Vulnerable Jan 29 16:25:07.002214 kernel: Speculative Store Bypass: Vulnerable Jan 29 16:25:07.002237 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 16:25:07.002256 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 16:25:07.002274 kernel: GDS: Unknown: Dependent on hypervisor status Jan 29 16:25:07.002292 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 16:25:07.002404 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 16:25:07.002425 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 16:25:07.002448 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 29 16:25:07.002466 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 29 16:25:07.002485 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 29 16:25:07.002502 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 29 16:25:07.002521 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 29 16:25:07.002539 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 29 16:25:07.002556 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 16:25:07.002574 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 29 16:25:07.002592 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 29 16:25:07.002671 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 29 16:25:07.002690 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 29 16:25:07.002712 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 29 16:25:07.002730 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 29 16:25:07.002749 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Jan 29 16:25:07.002767 kernel: Freeing SMP alternatives memory: 32K Jan 29 16:25:07.002784 kernel: pid_max: default: 32768 minimum: 301 Jan 29 16:25:07.002803 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 16:25:07.002821 kernel: landlock: Up and running. Jan 29 16:25:07.002840 kernel: SELinux: Initializing. Jan 29 16:25:07.002858 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 16:25:07.002877 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 16:25:07.002895 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4) Jan 29 16:25:07.002917 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 16:25:07.002936 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 16:25:07.002955 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 16:25:07.002974 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 29 16:25:07.002992 kernel: signal: max sigframe size: 3632 Jan 29 16:25:07.003010 kernel: rcu: Hierarchical SRCU implementation. Jan 29 16:25:07.003030 kernel: rcu: Max phase no-delay instances is 400. Jan 29 16:25:07.003047 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 16:25:07.003131 kernel: smp: Bringing up secondary CPUs ... Jan 29 16:25:07.003154 kernel: smpboot: x86: Booting SMP configuration: Jan 29 16:25:07.003172 kernel: .... node #0, CPUs: #1 Jan 29 16:25:07.003193 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 29 16:25:07.003213 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 29 16:25:07.003230 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 16:25:07.003249 kernel: smpboot: Max logical packages: 1 Jan 29 16:25:07.003268 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Jan 29 16:25:07.003286 kernel: devtmpfs: initialized Jan 29 16:25:07.003343 kernel: x86/mm: Memory block size: 128MB Jan 29 16:25:07.003366 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 16:25:07.003384 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 16:25:07.003502 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 16:25:07.003517 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 16:25:07.003534 kernel: audit: initializing netlink subsys (disabled) Jan 29 16:25:07.003553 kernel: audit: type=2000 audit(1738167905.414:1): state=initialized audit_enabled=0 res=1 Jan 29 16:25:07.003570 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 16:25:07.003718 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 16:25:07.003737 kernel: cpuidle: using governor menu Jan 29 16:25:07.003762 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 16:25:07.003779 kernel: dca service started, version 1.12.1 Jan 29 16:25:07.003797 kernel: PCI: Using configuration type 1 for base access Jan 29 16:25:07.003824 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 16:25:07.003843 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 16:25:07.003862 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 16:25:07.003880 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 16:25:07.003897 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 16:25:07.003915 kernel: ACPI: Added _OSI(Module Device) Jan 29 16:25:07.003938 kernel: ACPI: Added _OSI(Processor Device) Jan 29 16:25:07.003956 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 16:25:07.003973 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 16:25:07.003992 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 29 16:25:07.004010 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 16:25:07.004035 kernel: ACPI: Interpreter enabled Jan 29 16:25:07.004053 kernel: ACPI: PM: (supports S0 S5) Jan 29 16:25:07.004073 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 16:25:07.004092 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 16:25:07.004113 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 16:25:07.004132 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 29 16:25:07.004150 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 16:25:07.004488 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 16:25:07.004893 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 16:25:07.005074 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 16:25:07.005096 kernel: acpiphp: Slot [3] registered Jan 29 16:25:07.005122 kernel: acpiphp: Slot [4] registered Jan 29 16:25:07.005147 kernel: acpiphp: Slot [5] registered Jan 29 16:25:07.005164 kernel: acpiphp: Slot [6] registered Jan 29 16:25:07.005183 kernel: acpiphp: Slot [7] registered Jan 29 16:25:07.005202 kernel: acpiphp: Slot [8] registered Jan 29 16:25:07.005220 kernel: acpiphp: Slot [9] registered Jan 29 16:25:07.005238 kernel: acpiphp: Slot [10] registered Jan 29 16:25:07.005256 kernel: acpiphp: Slot [11] registered Jan 29 16:25:07.005274 kernel: acpiphp: Slot [12] registered Jan 29 16:25:07.006820 kernel: acpiphp: Slot [13] registered Jan 29 16:25:07.006846 kernel: acpiphp: Slot [14] registered Jan 29 16:25:07.006865 kernel: acpiphp: Slot [15] registered Jan 29 16:25:07.006883 kernel: acpiphp: Slot [16] registered Jan 29 16:25:07.006902 kernel: acpiphp: Slot [17] registered Jan 29 16:25:07.006921 kernel: acpiphp: Slot [18] registered Jan 29 16:25:07.006939 kernel: acpiphp: Slot [19] registered Jan 29 16:25:07.006956 kernel: acpiphp: Slot [20] registered Jan 29 16:25:07.006976 kernel: acpiphp: Slot [21] registered Jan 29 16:25:07.006993 kernel: acpiphp: Slot [22] registered Jan 29 16:25:07.007017 kernel: acpiphp: Slot [23] registered Jan 29 16:25:07.007043 kernel: acpiphp: Slot [24] registered Jan 29 16:25:07.007061 kernel: acpiphp: Slot [25] registered Jan 29 16:25:07.007079 kernel: acpiphp: Slot [26] registered Jan 29 16:25:07.007096 kernel: acpiphp: Slot [27] registered Jan 29 16:25:07.007114 kernel: acpiphp: Slot [28] registered Jan 29 16:25:07.007132 kernel: acpiphp: Slot [29] registered Jan 29 16:25:07.007149 kernel: acpiphp: Slot [30] registered Jan 29 16:25:07.007167 kernel: acpiphp: Slot [31] registered Jan 29 16:25:07.007190 kernel: PCI host bridge to bus 0000:00 
Jan 29 16:25:07.007597 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 16:25:07.011812 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 16:25:07.011960 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 16:25:07.012086 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 29 16:25:07.015080 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 16:25:07.015403 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 16:25:07.015588 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 29 16:25:07.015756 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jan 29 16:25:07.015892 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 29 16:25:07.016033 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jan 29 16:25:07.016174 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 29 16:25:07.016445 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 29 16:25:07.016585 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 29 16:25:07.018721 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 29 16:25:07.018899 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 29 16:25:07.019050 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 29 16:25:07.019197 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 11718 usecs Jan 29 16:25:07.019602 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jan 29 16:25:07.019767 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Jan 29 16:25:07.019922 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 29 16:25:07.020064 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 16:25:07.020206 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 29 16:25:07.020469 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Jan 29 16:25:07.025115 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 29 16:25:07.025433 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Jan 29 16:25:07.025462 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 16:25:07.025480 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 16:25:07.025504 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 16:25:07.025520 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 16:25:07.025536 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 16:25:07.025552 kernel: iommu: Default domain type: Translated Jan 29 16:25:07.025569 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 16:25:07.025585 kernel: PCI: Using ACPI for IRQ routing Jan 29 16:25:07.025601 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 16:25:07.025641 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 16:25:07.025657 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Jan 29 16:25:07.025817 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 29 16:25:07.025956 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 29 16:25:07.026169 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 16:25:07.026192 kernel: vgaarb: loaded Jan 29 16:25:07.026209 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 29 16:25:07.026225 kernel: hpet0: 8 
comparators, 32-bit 62.500000 MHz counter Jan 29 16:25:07.026242 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 16:25:07.026491 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 16:25:07.026518 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 16:25:07.026534 kernel: pnp: PnP ACPI init Jan 29 16:25:07.026550 kernel: pnp: PnP ACPI: found 5 devices Jan 29 16:25:07.028666 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 16:25:07.028694 kernel: NET: Registered PF_INET protocol family Jan 29 16:25:07.028709 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 16:25:07.028722 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 29 16:25:07.028736 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 16:25:07.028751 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 16:25:07.028771 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 16:25:07.028784 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 29 16:25:07.028798 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 16:25:07.028812 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 16:25:07.028827 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 16:25:07.028842 kernel: NET: Registered PF_XDP protocol family Jan 29 16:25:07.029089 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 16:25:07.029410 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 16:25:07.029553 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 16:25:07.031865 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 29 16:25:07.032025 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 16:25:07.032047 kernel: PCI: CLS 0 bytes, default 64 Jan 29 16:25:07.032064 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 16:25:07.032080 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jan 29 16:25:07.032096 kernel: clocksource: Switched to clocksource tsc Jan 29 16:25:07.032112 kernel: Initialise system trusted keyrings Jan 29 16:25:07.032134 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 16:25:07.032149 kernel: Key type asymmetric registered Jan 29 16:25:07.032164 kernel: Asymmetric key parser 'x509' registered Jan 29 16:25:07.032180 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 16:25:07.032195 kernel: io scheduler mq-deadline registered Jan 29 16:25:07.032211 kernel: io scheduler kyber registered Jan 29 16:25:07.032226 kernel: io scheduler bfq registered Jan 29 16:25:07.032241 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 16:25:07.032258 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 16:25:07.032381 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 16:25:07.032403 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 16:25:07.032419 kernel: i8042: Warning: Keylock active Jan 29 16:25:07.032434 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 16:25:07.032449 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 16:25:07.032711 kernel: rtc_cmos 00:00: RTC can 
wake from S4 Jan 29 16:25:07.032846 kernel: rtc_cmos 00:00: registered as rtc0 Jan 29 16:25:07.033009 kernel: rtc_cmos 00:00: setting system clock to 2025-01-29T16:25:06 UTC (1738167906) Jan 29 16:25:07.033149 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 29 16:25:07.033242 kernel: intel_pstate: CPU model not supported Jan 29 16:25:07.033425 kernel: NET: Registered PF_INET6 protocol family Jan 29 16:25:07.033444 kernel: Segment Routing with IPv6 Jan 29 16:25:07.033462 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 16:25:07.033523 kernel: NET: Registered PF_PACKET protocol family Jan 29 16:25:07.033542 kernel: Key type dns_resolver registered Jan 29 16:25:07.033558 kernel: IPI shorthand broadcast: enabled Jan 29 16:25:07.033742 kernel: sched_clock: Marking stable (664090054, 230937313)->(994825920, -99798553) Jan 29 16:25:07.033764 kernel: registered taskstats version 1 Jan 29 16:25:07.033779 kernel: Loading compiled-in X.509 certificates Jan 29 16:25:07.033795 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340' Jan 29 16:25:07.033811 kernel: Key type .fscrypt registered Jan 29 16:25:07.033827 kernel: Key type fscrypt-provisioning registered Jan 29 16:25:07.033844 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 16:25:07.033860 kernel: ima: Allocated hash algorithm: sha1 Jan 29 16:25:07.033876 kernel: ima: No architecture policies found Jan 29 16:25:07.033893 kernel: clk: Disabling unused clocks Jan 29 16:25:07.033913 kernel: Freeing unused kernel image (initmem) memory: 43472K Jan 29 16:25:07.033929 kernel: Write protecting the kernel read-only data: 38912k Jan 29 16:25:07.033945 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K Jan 29 16:25:07.033962 kernel: Run /init as init process Jan 29 16:25:07.033978 kernel: with arguments: Jan 29 16:25:07.033993 kernel: /init Jan 29 16:25:07.034007 kernel: with environment: Jan 29 16:25:07.034022 kernel: HOME=/ Jan 29 16:25:07.034037 kernel: TERM=linux Jan 29 16:25:07.034057 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 16:25:07.034143 systemd[1]: Successfully made /usr/ read-only. Jan 29 16:25:07.034165 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:25:07.034183 systemd[1]: Detected virtualization amazon. Jan 29 16:25:07.034199 systemd[1]: Detected architecture x86-64. Jan 29 16:25:07.034216 systemd[1]: Running in initrd. Jan 29 16:25:07.034232 systemd[1]: No hostname configured, using default hostname. Jan 29 16:25:07.034252 systemd[1]: Hostname set to . Jan 29 16:25:07.034269 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:25:07.034423 systemd[1]: Queued start job for default target initrd.target. Jan 29 16:25:07.034443 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:25:07.034464 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:25:07.034484 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 16:25:07.034501 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 29 16:25:07.034517 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 16:25:07.034539 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 16:25:07.034558 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 16:25:07.034575 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 16:25:07.034592 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:25:07.034639 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:25:07.034657 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:25:07.034672 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:25:07.034694 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:25:07.034712 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:25:07.034730 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:25:07.034748 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:25:07.034765 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 16:25:07.034782 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 29 16:25:07.034798 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:25:07.034817 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:25:07.034835 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:25:07.034856 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:25:07.034875 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 16:25:07.034892 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:25:07.034911 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 16:25:07.034932 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 16:25:07.034953 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:25:07.034971 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:25:07.034990 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:25:07.035038 systemd-journald[179]: Collecting audit messages is disabled. Jan 29 16:25:07.035199 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 16:25:07.035219 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:25:07.035239 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 16:25:07.035426 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 16:25:07.035456 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:25:07.035489 systemd-journald[179]: Journal started Jan 29 16:25:07.037308 systemd-journald[179]: Runtime Journal (/run/log/journal/ec29e2c6003bd3266716d8af0ed609ac) is 4.8M, max 38.5M, 33.7M free. Jan 29 16:25:07.039625 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:25:07.028724 systemd-modules-load[180]: Inserted module 'overlay' Jan 29 16:25:07.213218 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Jan 29 16:25:07.213270 kernel: Bridge firewalling registered Jan 29 16:25:07.060097 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:25:07.110853 systemd-modules-load[180]: Inserted module 'br_netfilter' Jan 29 16:25:07.214939 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:25:07.225263 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:25:07.229791 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:25:07.248990 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:25:07.263051 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:25:07.282528 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:25:07.292228 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:25:07.299112 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:25:07.313971 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:25:07.319041 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:25:07.323845 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:25:07.327090 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 16:25:07.376131 dracut-cmdline[215]: dracut-dracut-053 Jan 29 16:25:07.395810 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:25:07.420330 systemd-resolved[211]: Positive Trust Anchors: Jan 29 16:25:07.420346 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:25:07.420408 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:25:07.426139 systemd-resolved[211]: Defaulting to hostname 'linux'. Jan 29 16:25:07.427774 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:25:07.434434 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:25:07.501637 kernel: SCSI subsystem initialized Jan 29 16:25:07.512640 kernel: Loading iSCSI transport class v2.0-870. 
Jan 29 16:25:07.523638 kernel: iscsi: registered transport (tcp) Jan 29 16:25:07.545174 kernel: iscsi: registered transport (qla4xxx) Jan 29 16:25:07.545375 kernel: QLogic iSCSI HBA Driver Jan 29 16:25:07.584251 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 16:25:07.592777 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 16:25:07.618293 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 16:25:07.618359 kernel: device-mapper: uevent: version 1.0.3 Jan 29 16:25:07.618373 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 16:25:07.661647 kernel: raid6: avx512x4 gen() 10230 MB/s Jan 29 16:25:07.678646 kernel: raid6: avx512x2 gen() 17485 MB/s Jan 29 16:25:07.695647 kernel: raid6: avx512x1 gen() 14083 MB/s Jan 29 16:25:07.712643 kernel: raid6: avx2x4 gen() 17018 MB/s Jan 29 16:25:07.729656 kernel: raid6: avx2x2 gen() 15427 MB/s Jan 29 16:25:07.747102 kernel: raid6: avx2x1 gen() 11211 MB/s Jan 29 16:25:07.747182 kernel: raid6: using algorithm avx512x2 gen() 17485 MB/s Jan 29 16:25:07.764660 kernel: raid6: .... xor() 22873 MB/s, rmw enabled Jan 29 16:25:07.764756 kernel: raid6: using avx512x2 recovery algorithm Jan 29 16:25:07.791634 kernel: xor: automatically using best checksumming function avx Jan 29 16:25:07.969635 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 16:25:07.980887 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:25:07.988193 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:25:08.016373 systemd-udevd[398]: Using default interface naming scheme 'v255'. Jan 29 16:25:08.022710 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:25:08.034366 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 16:25:08.057323 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Jan 29 16:25:08.112835 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:25:08.121883 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:25:08.189952 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:25:08.201850 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 16:25:08.232956 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 16:25:08.238502 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:25:08.242592 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:25:08.246957 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:25:08.254846 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 16:25:08.288999 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:25:08.326634 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 16:25:08.337873 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 29 16:25:08.362797 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 29 16:25:08.362990 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 16:25:08.363012 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. 
Jan 29 16:25:08.363174 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:06:08:2a:d1:b7 Jan 29 16:25:08.349581 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:25:08.349778 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:25:08.351922 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:25:08.373820 kernel: AES CTR mode by8 optimization enabled Jan 29 16:25:08.354252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:25:08.354422 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:25:08.356478 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:25:08.367645 (udev-worker)[444]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:25:08.377054 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:25:08.380927 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:25:08.401989 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 29 16:25:08.402334 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 29 16:25:08.420654 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 29 16:25:08.433639 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 16:25:08.433705 kernel: GPT:9289727 != 16777215 Jan 29 16:25:08.433723 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 16:25:08.433741 kernel: GPT:9289727 != 16777215 Jan 29 16:25:08.433757 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 16:25:08.433781 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 16:25:08.520698 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (453) Jan 29 16:25:08.524628 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (452) Jan 29 16:25:08.632057 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:25:08.642999 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:25:08.702648 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 29 16:25:08.703126 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:25:08.722840 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 29 16:25:08.754659 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 16:25:08.767236 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 29 16:25:08.767409 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 29 16:25:08.780060 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 16:25:08.804757 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 16:25:08.804826 disk-uuid[629]: Primary Header is updated. Jan 29 16:25:08.804826 disk-uuid[629]: Secondary Entries is updated. Jan 29 16:25:08.804826 disk-uuid[629]: Secondary Header is updated. 
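The GPT warnings above ("GPT:Primary header thinks Alt. header is not at the end of the disk.", "GPT:9289727 != 16777215") are the usual first-boot condition where the image's partition table was built for a smaller disk than the EBS volume it was written to; the disk-uuid messages that follow show the headers being rewritten. A rough reading of the two numbers, assuming 512-byte logical sectors (an assumption, not stated in the log), as a minimal sketch:

    # Both values are LBAs from the kernel's GPT validation message:
    # the backup-header location recorded in the image vs. the volume's last sector.
    image_alternate_lba = 9289727    # from "GPT:9289727 != 16777215"
    volume_last_lba = 16777215
    SECTOR = 512                     # assumed 512-byte logical sectors

    print((image_alternate_lba + 1) * SECTOR / 2**30)  # ~4.43 GiB image layout
    print((volume_last_lba + 1) * SECTOR / 2**30)      # 8.00 GiB attached volume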
Jan 29 16:25:09.834686 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 16:25:09.835279 disk-uuid[630]: The operation has completed successfully. Jan 29 16:25:10.025151 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 16:25:10.025277 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 16:25:10.066818 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 16:25:10.083460 sh[890]: Success Jan 29 16:25:10.107841 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 16:25:10.230319 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 16:25:10.240857 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 16:25:10.254347 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 16:25:10.301968 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3 Jan 29 16:25:10.302056 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:25:10.302083 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 16:25:10.303401 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 16:25:10.304299 kernel: BTRFS info (device dm-0): using free space tree Jan 29 16:25:10.319635 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 16:25:10.325775 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 16:25:10.326984 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 16:25:10.336849 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 16:25:10.342714 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 16:25:10.378154 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:25:10.378238 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:25:10.378262 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 16:25:10.385670 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 16:25:10.405880 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 16:25:10.408344 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:25:10.420867 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 16:25:10.436847 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 16:25:10.516879 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:25:10.534977 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:25:10.606408 systemd-networkd[1083]: lo: Link UP Jan 29 16:25:10.606420 systemd-networkd[1083]: lo: Gained carrier Jan 29 16:25:10.619694 systemd-networkd[1083]: Enumeration completed Jan 29 16:25:10.619852 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:25:10.621415 systemd[1]: Reached target network.target - Network. Jan 29 16:25:10.622905 systemd-networkd[1083]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 29 16:25:10.622912 systemd-networkd[1083]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:25:10.634743 systemd-networkd[1083]: eth0: Link UP Jan 29 16:25:10.634750 systemd-networkd[1083]: eth0: Gained carrier Jan 29 16:25:10.634768 systemd-networkd[1083]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:25:10.653704 systemd-networkd[1083]: eth0: DHCPv4 address 172.31.23.123/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 16:25:10.664186 ignition[1023]: Ignition 2.20.0 Jan 29 16:25:10.664201 ignition[1023]: Stage: fetch-offline Jan 29 16:25:10.664425 ignition[1023]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:25:10.664438 ignition[1023]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 16:25:10.666626 ignition[1023]: Ignition finished successfully Jan 29 16:25:10.671223 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:25:10.678104 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 16:25:10.698210 ignition[1094]: Ignition 2.20.0 Jan 29 16:25:10.698222 ignition[1094]: Stage: fetch Jan 29 16:25:10.698550 ignition[1094]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:25:10.698559 ignition[1094]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 16:25:10.698651 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 16:25:10.838132 ignition[1094]: PUT result: OK Jan 29 16:25:10.841130 ignition[1094]: parsed url from cmdline: "" Jan 29 16:25:10.841161 ignition[1094]: no config URL provided Jan 29 16:25:10.841172 ignition[1094]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:25:10.841187 ignition[1094]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:25:10.841212 ignition[1094]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 16:25:10.843871 ignition[1094]: PUT result: OK Jan 29 16:25:10.843938 ignition[1094]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 29 16:25:10.849274 ignition[1094]: GET result: OK Jan 29 16:25:10.849331 ignition[1094]: parsing config with SHA512: fba92ae4c06692351eacd679993b8ba22c5cc7518b50bf3bab08ee78ba87cd594e1db8bd82a9af21e1234a99082e06c26bb1592a218c7a555537d1f20044aff7 Jan 29 16:25:10.858475 unknown[1094]: fetched base config from "system" Jan 29 16:25:10.858485 unknown[1094]: fetched base config from "system" Jan 29 16:25:10.858752 ignition[1094]: fetch: fetch complete Jan 29 16:25:10.858490 unknown[1094]: fetched user config from "aws" Jan 29 16:25:10.858756 ignition[1094]: fetch: fetch passed Jan 29 16:25:10.858803 ignition[1094]: Ignition finished successfully Jan 29 16:25:10.865649 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 16:25:10.876957 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
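The fetch stage above records Ignition's IMDSv2 exchange with 169.254.169.254: a PUT to the token endpoint, then a GET of /2019-10-01/user-data using that token. Purely as an illustration of the same exchange (endpoint paths taken from the log; the token TTL value below is an assumption, and this is not Ignition's own code), a minimal Python sketch:

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Obtain an IMDSv2 session token first (the PUT the log shows); the TTL is an assumed value.
    token_req = urllib.request.Request(
        IMDS + "/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req).read().decode()

    # Fetch the instance user data (the Ignition config) with that token,
    # using the same dated path the log shows Ignition requesting.
    data_req = urllib.request.Request(
        IMDS + "/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    print(urllib.request.urlopen(data_req).read().decode(errors="replace"))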
Jan 29 16:25:10.902164 ignition[1100]: Ignition 2.20.0 Jan 29 16:25:10.902181 ignition[1100]: Stage: kargs Jan 29 16:25:10.902595 ignition[1100]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:25:10.902635 ignition[1100]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 16:25:10.902760 ignition[1100]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 16:25:10.904336 ignition[1100]: PUT result: OK Jan 29 16:25:10.913423 ignition[1100]: kargs: kargs passed Jan 29 16:25:10.913507 ignition[1100]: Ignition finished successfully Jan 29 16:25:10.916625 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 16:25:10.923800 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 16:25:10.947689 ignition[1106]: Ignition 2.20.0 Jan 29 16:25:10.947705 ignition[1106]: Stage: disks Jan 29 16:25:10.948138 ignition[1106]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:25:10.948155 ignition[1106]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 16:25:10.948300 ignition[1106]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 16:25:10.949978 ignition[1106]: PUT result: OK Jan 29 16:25:10.957668 ignition[1106]: disks: disks passed Jan 29 16:25:10.957792 ignition[1106]: Ignition finished successfully Jan 29 16:25:10.960283 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 16:25:10.960864 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 16:25:10.966733 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 16:25:10.970626 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:25:10.972185 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:25:10.974982 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:25:10.982833 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 16:25:11.018145 systemd-fsck[1115]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 16:25:11.022571 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 16:25:11.303773 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 16:25:11.490633 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none. Jan 29 16:25:11.496988 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 16:25:11.498247 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 16:25:11.519752 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:25:11.524734 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 16:25:11.527185 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 16:25:11.527268 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 16:25:11.527309 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:25:11.558735 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 29 16:25:11.571051 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1134) Jan 29 16:25:11.584050 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:25:11.584136 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:25:11.584157 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 16:25:11.584883 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 16:25:11.592629 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 16:25:11.594903 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 16:25:11.715268 initrd-setup-root[1158]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 16:25:11.729708 initrd-setup-root[1165]: cut: /sysroot/etc/group: No such file or directory Jan 29 16:25:11.740262 initrd-setup-root[1172]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 16:25:11.747450 initrd-setup-root[1179]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 16:25:11.883097 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 16:25:11.894793 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 16:25:11.902008 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 16:25:11.907633 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:25:11.934754 systemd-networkd[1083]: eth0: Gained IPv6LL Jan 29 16:25:11.946544 ignition[1246]: INFO : Ignition 2.20.0 Jan 29 16:25:11.947989 ignition[1246]: INFO : Stage: mount Jan 29 16:25:11.947989 ignition[1246]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:25:11.947989 ignition[1246]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 16:25:11.947989 ignition[1246]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 16:25:11.953902 ignition[1246]: INFO : PUT result: OK Jan 29 16:25:11.955945 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 16:25:11.960634 ignition[1246]: INFO : mount: mount passed Jan 29 16:25:11.961720 ignition[1246]: INFO : Ignition finished successfully Jan 29 16:25:11.963571 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 16:25:11.970776 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 16:25:12.293056 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 16:25:12.305875 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:25:12.335641 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1260) Jan 29 16:25:12.348023 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:25:12.348162 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:25:12.348188 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 16:25:12.364782 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 16:25:12.369301 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 16:25:12.439094 ignition[1277]: INFO : Ignition 2.20.0 Jan 29 16:25:12.439094 ignition[1277]: INFO : Stage: files Jan 29 16:25:12.444708 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:25:12.444708 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 16:25:12.444708 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 16:25:12.459014 ignition[1277]: INFO : PUT result: OK Jan 29 16:25:12.462430 ignition[1277]: DEBUG : files: compiled without relabeling support, skipping Jan 29 16:25:12.465233 ignition[1277]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 16:25:12.465233 ignition[1277]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 16:25:12.470714 ignition[1277]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 16:25:12.472754 ignition[1277]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 16:25:12.475903 ignition[1277]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 16:25:12.475805 unknown[1277]: wrote ssh authorized keys file for user: core Jan 29 16:25:12.482267 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 29 16:25:12.486100 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 16:25:12.486100 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:25:12.486100 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:25:12.486100 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:25:12.486100 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:25:12.486100 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:25:12.486100 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 16:25:12.944804 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 29 16:25:13.448460 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 16:25:13.452258 ignition[1277]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:25:13.452258 ignition[1277]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:25:13.452258 ignition[1277]: INFO : files: files passed Jan 29 16:25:13.452258 ignition[1277]: INFO : Ignition finished successfully Jan 29 16:25:13.461709 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jan 29 16:25:13.466838 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 16:25:13.469820 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 16:25:13.482423 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 16:25:13.482561 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 16:25:13.494561 initrd-setup-root-after-ignition[1305]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:25:13.498520 initrd-setup-root-after-ignition[1305]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:25:13.501666 initrd-setup-root-after-ignition[1309]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:25:13.502144 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:25:13.505902 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 16:25:13.514769 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 16:25:13.542239 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 16:25:13.542478 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 16:25:13.546257 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 16:25:13.548991 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 16:25:13.549180 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 16:25:13.551746 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 16:25:13.578961 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:25:13.586821 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 16:25:13.613346 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:25:13.613636 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:25:13.619783 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 16:25:13.622553 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 16:25:13.622783 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:25:13.626922 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 16:25:13.629931 systemd[1]: Stopped target basic.target - Basic System. Jan 29 16:25:13.632310 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 16:25:13.632531 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:25:13.632755 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 16:25:13.632954 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 16:25:13.633235 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:25:13.633660 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 16:25:13.633852 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 16:25:13.634052 systemd[1]: Stopped target swap.target - Swaps. Jan 29 16:25:13.634233 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 16:25:13.634430 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 29 16:25:13.635025 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:25:13.635236 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:25:13.635393 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 16:25:13.649345 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:25:13.652330 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 16:25:13.652463 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 16:25:13.660652 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 16:25:13.664646 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:25:13.671652 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 16:25:13.673556 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 16:25:13.691873 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 16:25:13.693416 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 16:25:13.693554 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:25:13.699999 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 16:25:13.708303 ignition[1329]: INFO : Ignition 2.20.0 Jan 29 16:25:13.708303 ignition[1329]: INFO : Stage: umount Jan 29 16:25:13.719981 ignition[1329]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:25:13.719981 ignition[1329]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 16:25:13.719981 ignition[1329]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 16:25:13.719981 ignition[1329]: INFO : PUT result: OK Jan 29 16:25:13.713978 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 16:25:13.714253 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:25:13.716595 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 16:25:13.736733 ignition[1329]: INFO : umount: umount passed Jan 29 16:25:13.736733 ignition[1329]: INFO : Ignition finished successfully Jan 29 16:25:13.718113 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:25:13.736807 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 16:25:13.736961 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 16:25:13.740224 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 16:25:13.740362 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 16:25:13.744358 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 16:25:13.744497 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 16:25:13.746562 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 16:25:13.746622 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 16:25:13.753973 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 16:25:13.754025 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 16:25:13.757801 systemd[1]: Stopped target network.target - Network. Jan 29 16:25:13.760375 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 16:25:13.762152 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 16:25:13.768390 systemd[1]: Stopped target paths.target - Path Units. Jan 29 16:25:13.773574 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 16:25:13.778116 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:25:13.782796 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 16:25:13.782897 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 16:25:13.788440 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 16:25:13.788511 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:25:13.792257 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 16:25:13.794835 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:25:13.797382 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 16:25:13.797593 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 16:25:13.801355 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 16:25:13.801456 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 16:25:13.805151 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 16:25:13.809781 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 16:25:13.815689 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 16:25:13.817678 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 16:25:13.817766 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 16:25:13.821560 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 29 16:25:13.822107 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 16:25:13.822185 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:25:13.842836 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:25:13.843159 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 16:25:13.843249 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 16:25:13.848462 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 29 16:25:13.849488 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 16:25:13.849575 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:25:13.863743 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 16:25:13.866131 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 16:25:13.866217 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:25:13.868782 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:25:13.868846 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:25:13.885967 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 16:25:13.886319 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 16:25:13.889061 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:25:13.894568 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 29 16:25:13.910090 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 29 16:25:13.910489 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:25:13.921473 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 16:25:13.921623 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 16:25:13.924630 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 16:25:13.924735 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 16:25:13.931987 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 16:25:13.932057 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:25:13.937227 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 16:25:13.937640 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:25:13.947271 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 16:25:13.947394 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 16:25:13.952151 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:25:13.952229 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:25:13.958021 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 16:25:13.958102 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 16:25:13.968866 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 16:25:13.971365 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 16:25:13.971564 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:25:13.983604 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:25:13.983806 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:25:13.991585 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 16:25:13.993438 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 16:25:13.998097 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 16:25:14.002442 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 16:25:14.007890 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 16:25:14.015774 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 16:25:14.035932 systemd[1]: Switching root. Jan 29 16:25:14.078646 systemd-journald[179]: Journal stopped Jan 29 16:25:15.892050 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). Jan 29 16:25:15.892152 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 16:25:15.892181 kernel: SELinux: policy capability open_perms=1 Jan 29 16:25:15.892211 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 16:25:15.892248 kernel: SELinux: policy capability always_check_network=0 Jan 29 16:25:15.892270 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 16:25:15.892293 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 16:25:15.892315 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 16:25:15.892337 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 16:25:15.892369 kernel: audit: type=1403 audit(1738167914.299:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 16:25:15.892397 systemd[1]: Successfully loaded SELinux policy in 52.884ms. 
Jan 29 16:25:15.892430 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 25.498ms. Jan 29 16:25:15.892457 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:25:15.892485 systemd[1]: Detected virtualization amazon. Jan 29 16:25:15.892510 systemd[1]: Detected architecture x86-64. Jan 29 16:25:15.892539 systemd[1]: Detected first boot. Jan 29 16:25:15.892563 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:25:15.892589 zram_generator::config[1373]: No configuration found. Jan 29 16:25:15.894342 kernel: Guest personality initialized and is inactive Jan 29 16:25:15.894377 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jan 29 16:25:15.894400 kernel: Initialized host personality Jan 29 16:25:15.894424 kernel: NET: Registered PF_VSOCK protocol family Jan 29 16:25:15.894449 systemd[1]: Populated /etc with preset unit settings. Jan 29 16:25:15.894484 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 29 16:25:15.894510 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 16:25:15.894535 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 16:25:15.894561 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 16:25:15.894594 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 16:25:15.894635 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 16:25:15.894654 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 16:25:15.894672 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 16:25:15.894694 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 16:25:15.894720 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 16:25:15.894746 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 16:25:15.894772 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 16:25:15.894798 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:25:15.894825 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:25:15.894861 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 16:25:15.894885 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 16:25:15.894911 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 16:25:15.894938 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:25:15.894961 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 16:25:15.894985 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:25:15.895008 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 16:25:15.895031 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Jan 29 16:25:15.895054 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 16:25:15.895078 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 16:25:15.895101 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:25:15.895127 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:25:15.895151 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:25:15.895175 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:25:15.895199 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 16:25:15.895222 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 16:25:15.895245 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 29 16:25:15.895268 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:25:15.895291 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:25:15.895314 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:25:15.895341 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 16:25:15.895364 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 16:25:15.895395 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 16:25:15.895421 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 16:25:15.895464 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:15.895488 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 16:25:15.895513 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 16:25:15.895537 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 16:25:15.895562 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 16:25:15.895590 systemd[1]: Reached target machines.target - Containers. Jan 29 16:25:15.895682 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 16:25:15.895703 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:25:15.895723 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:25:15.895741 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 16:25:15.895759 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:25:15.895777 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:25:15.895796 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:25:15.895818 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 16:25:15.895841 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:25:15.895867 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 16:25:15.895892 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 29 16:25:15.895917 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 16:25:15.895942 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 16:25:15.895966 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 16:25:15.895992 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:25:15.896021 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:25:15.896045 kernel: loop: module loaded Jan 29 16:25:15.896071 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:25:15.896096 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 16:25:15.896121 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 16:25:15.896147 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 29 16:25:15.896170 kernel: fuse: init (API version 7.39) Jan 29 16:25:15.896239 systemd-journald[1456]: Collecting audit messages is disabled. Jan 29 16:25:15.896288 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:25:15.896314 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 16:25:15.896340 systemd[1]: Stopped verity-setup.service. Jan 29 16:25:15.896365 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:15.896392 systemd-journald[1456]: Journal started Jan 29 16:25:15.896441 systemd-journald[1456]: Runtime Journal (/run/log/journal/ec29e2c6003bd3266716d8af0ed609ac) is 4.8M, max 38.5M, 33.7M free. Jan 29 16:25:15.365280 systemd[1]: Queued start job for default target multi-user.target. Jan 29 16:25:15.374277 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 29 16:25:15.374805 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 16:25:15.904078 kernel: ACPI: bus type drm_connector registered Jan 29 16:25:15.906625 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:25:15.916747 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 16:25:15.918599 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 16:25:15.920663 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 16:25:15.923871 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 16:25:15.925577 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 16:25:15.927357 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 16:25:15.929138 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 16:25:15.931082 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:25:15.933014 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 16:25:15.933178 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 16:25:15.935004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:25:15.935256 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 29 16:25:15.937181 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:25:15.937429 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:25:15.939120 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:25:15.939422 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:25:15.942341 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 16:25:15.942505 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 16:25:15.944176 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:25:15.944330 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:25:15.946093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:25:15.947697 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 16:25:15.950333 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 16:25:15.953168 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 29 16:25:15.971376 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 16:25:15.981728 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 16:25:15.989149 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 16:25:15.990762 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 16:25:15.990815 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:25:15.995047 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 29 16:25:16.006054 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 16:25:16.010947 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 16:25:16.014486 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:25:16.017828 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 16:25:16.023918 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 16:25:16.027002 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:25:16.031295 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 16:25:16.034485 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:25:16.041846 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:25:16.052727 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 16:25:16.058648 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 16:25:16.064017 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 16:25:16.069905 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 16:25:16.072671 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 29 16:25:16.084530 systemd-journald[1456]: Time spent on flushing to /var/log/journal/ec29e2c6003bd3266716d8af0ed609ac is 69.456ms for 956 entries. Jan 29 16:25:16.084530 systemd-journald[1456]: System Journal (/var/log/journal/ec29e2c6003bd3266716d8af0ed609ac) is 8M, max 195.6M, 187.6M free. Jan 29 16:25:16.171852 systemd-journald[1456]: Received client request to flush runtime journal. Jan 29 16:25:16.173692 kernel: loop0: detected capacity change from 0 to 62832 Jan 29 16:25:16.122517 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 16:25:16.125166 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 16:25:16.131852 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 29 16:25:16.175159 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:25:16.178033 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 16:25:16.195399 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 16:25:16.197881 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:25:16.199888 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 29 16:25:16.225335 udevadm[1518]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 16:25:16.239972 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 16:25:16.240696 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 16:25:16.250754 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:25:16.270663 kernel: loop1: detected capacity change from 0 to 138176 Jan 29 16:25:16.313151 systemd-tmpfiles[1524]: ACLs are not supported, ignoring. Jan 29 16:25:16.313179 systemd-tmpfiles[1524]: ACLs are not supported, ignoring. Jan 29 16:25:16.325869 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:25:16.338653 kernel: loop2: detected capacity change from 0 to 210664 Jan 29 16:25:16.378870 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 16:25:16.431644 kernel: loop3: detected capacity change from 0 to 147912 Jan 29 16:25:16.501637 kernel: loop4: detected capacity change from 0 to 62832 Jan 29 16:25:16.530645 kernel: loop5: detected capacity change from 0 to 138176 Jan 29 16:25:16.583639 kernel: loop6: detected capacity change from 0 to 210664 Jan 29 16:25:16.623644 kernel: loop7: detected capacity change from 0 to 147912 Jan 29 16:25:16.686459 (sd-merge)[1531]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 29 16:25:16.687411 (sd-merge)[1531]: Merged extensions into '/usr'. Jan 29 16:25:16.696464 systemd[1]: Reload requested from client PID 1505 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 16:25:16.696483 systemd[1]: Reloading... Jan 29 16:25:16.829649 zram_generator::config[1565]: No configuration found. Jan 29 16:25:17.073658 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:25:17.142263 ldconfig[1500]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jan 29 16:25:17.200503 systemd[1]: Reloading finished in 503 ms. Jan 29 16:25:17.220141 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 16:25:17.224361 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 16:25:17.240880 systemd[1]: Starting ensure-sysext.service... Jan 29 16:25:17.245849 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:25:17.274432 systemd[1]: Reload requested from client PID 1608 ('systemctl') (unit ensure-sysext.service)... Jan 29 16:25:17.274469 systemd[1]: Reloading... Jan 29 16:25:17.275106 systemd-tmpfiles[1609]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 16:25:17.275531 systemd-tmpfiles[1609]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 16:25:17.281907 systemd-tmpfiles[1609]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 16:25:17.282545 systemd-tmpfiles[1609]: ACLs are not supported, ignoring. Jan 29 16:25:17.283744 systemd-tmpfiles[1609]: ACLs are not supported, ignoring. Jan 29 16:25:17.295781 systemd-tmpfiles[1609]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:25:17.295941 systemd-tmpfiles[1609]: Skipping /boot Jan 29 16:25:17.320146 systemd-tmpfiles[1609]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:25:17.320168 systemd-tmpfiles[1609]: Skipping /boot Jan 29 16:25:17.374641 zram_generator::config[1634]: No configuration found. Jan 29 16:25:17.522073 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:25:17.597318 systemd[1]: Reloading finished in 322 ms. Jan 29 16:25:17.612529 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 16:25:17.626011 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:25:17.641962 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:25:17.645854 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 16:25:17.652716 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 16:25:17.661064 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:25:17.667146 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:25:17.671594 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 16:25:17.676543 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:17.676765 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:25:17.682460 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:25:17.689885 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:25:17.699646 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:25:17.702290 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 29 16:25:17.702473 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:25:17.716211 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 16:25:17.717733 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:17.727785 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:17.728137 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:25:17.728411 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:25:17.728559 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:25:17.728744 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:17.737481 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:17.738360 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:25:17.746087 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:25:17.777379 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:25:17.777580 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:25:17.778312 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 16:25:17.782868 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:25:17.801442 systemd[1]: Finished ensure-sysext.service. Jan 29 16:25:17.803647 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 16:25:17.806302 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:25:17.806543 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:25:17.808823 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:25:17.809051 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:25:17.824754 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:25:17.827952 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:25:17.837594 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:25:17.837984 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 29 16:25:17.844110 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:25:17.844361 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:25:17.854222 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 16:25:17.863964 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 16:25:17.897349 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 16:25:17.910020 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 16:25:17.927116 systemd-udevd[1699]: Using default interface naming scheme 'v255'. Jan 29 16:25:17.933736 augenrules[1732]: No rules Jan 29 16:25:17.935543 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 16:25:17.941052 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:25:17.941688 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:25:17.944263 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:25:17.993678 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:25:18.008254 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:25:18.084846 systemd-resolved[1696]: Positive Trust Anchors: Jan 29 16:25:18.084863 systemd-resolved[1696]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:25:18.084925 systemd-resolved[1696]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:25:18.095180 systemd-resolved[1696]: Defaulting to hostname 'linux'. Jan 29 16:25:18.098855 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:25:18.100826 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:25:18.158274 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 16:25:18.172726 systemd-networkd[1741]: lo: Link UP Jan 29 16:25:18.172741 systemd-networkd[1741]: lo: Gained carrier Jan 29 16:25:18.174146 systemd-networkd[1741]: Enumeration completed Jan 29 16:25:18.175749 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:25:18.177434 systemd[1]: Reached target network.target - Network. Jan 29 16:25:18.184134 (udev-worker)[1753]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:25:18.185841 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 29 16:25:18.194861 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 29 16:25:18.232710 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 16:25:18.241450 systemd-networkd[1741]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:25:18.241726 systemd-networkd[1741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:25:18.246680 systemd-networkd[1741]: eth0: Link UP Jan 29 16:25:18.247382 systemd-networkd[1741]: eth0: Gained carrier Jan 29 16:25:18.248305 systemd-networkd[1741]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:25:18.258738 systemd-networkd[1741]: eth0: DHCPv4 address 172.31.23.123/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 16:25:18.275665 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1754) Jan 29 16:25:18.325635 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 16:25:18.346649 kernel: ACPI: button: Power Button [PWRF] Jan 29 16:25:18.352640 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 29 16:25:18.362658 kernel: ACPI: button: Sleep Button [SLPF] Jan 29 16:25:18.364636 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 29 16:25:18.407671 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jan 29 16:25:18.479015 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 16:25:18.485477 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 16:25:18.498463 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 16:25:18.501670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:25:18.515205 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 16:25:18.524981 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 16:25:18.529867 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 16:25:18.545692 lvm[1859]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:25:18.572725 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 16:25:18.573228 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:25:18.579961 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 16:25:18.585719 lvm[1863]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:25:18.610251 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 16:25:18.748813 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:25:18.750536 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:25:18.752081 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 16:25:18.753765 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 16:25:18.755463 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Jan 29 16:25:18.756983 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 16:25:18.759383 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 16:25:18.761485 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 16:25:18.761519 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:25:18.762864 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:25:18.779402 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 16:25:18.788392 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 16:25:18.802448 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 16:25:18.805211 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 16:25:18.808196 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 29 16:25:18.818439 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 16:25:18.820377 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 29 16:25:18.822577 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 16:25:18.824465 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:25:18.825965 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:25:18.827463 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:25:18.827566 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:25:18.834740 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:25:18.838373 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 16:25:18.843926 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:25:18.858647 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:25:18.876594 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:25:18.882824 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 16:25:18.887856 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:25:18.902747 jq[1873]: false Jan 29 16:25:18.912859 systemd[1]: Started ntpd.service - Network Time Service. Jan 29 16:25:18.923733 systemd[1]: Starting setup-oem.service - Setup OEM... 
Jan 29 16:25:18.937289 extend-filesystems[1874]: Found loop4 Jan 29 16:25:18.937289 extend-filesystems[1874]: Found loop5 Jan 29 16:25:18.937289 extend-filesystems[1874]: Found loop6 Jan 29 16:25:18.937289 extend-filesystems[1874]: Found loop7 Jan 29 16:25:18.937289 extend-filesystems[1874]: Found nvme0n1 Jan 29 16:25:18.937289 extend-filesystems[1874]: Found nvme0n1p1 Jan 29 16:25:18.937289 extend-filesystems[1874]: Found nvme0n1p2 Jan 29 16:25:18.937289 extend-filesystems[1874]: Found nvme0n1p3 Jan 29 16:25:18.937289 extend-filesystems[1874]: Found usr Jan 29 16:25:18.937289 extend-filesystems[1874]: Found nvme0n1p4 Jan 29 16:25:18.937289 extend-filesystems[1874]: Found nvme0n1p6 Jan 29 16:25:18.937289 extend-filesystems[1874]: Found nvme0n1p7 Jan 29 16:25:18.937289 extend-filesystems[1874]: Found nvme0n1p9 Jan 29 16:25:18.937289 extend-filesystems[1874]: Checking size of /dev/nvme0n1p9 Jan 29 16:25:18.937171 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:25:18.948883 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 16:25:18.977089 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:25:18.983187 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 16:25:18.983939 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 16:25:18.991877 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:25:18.999597 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 16:25:19.012217 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:25:19.012486 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 16:25:19.039954 coreos-metadata[1871]: Jan 29 16:25:19.038 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 16:25:19.052930 coreos-metadata[1871]: Jan 29 16:25:19.040 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 29 16:25:19.052930 coreos-metadata[1871]: Jan 29 16:25:19.043 INFO Fetch successful Jan 29 16:25:19.052930 coreos-metadata[1871]: Jan 29 16:25:19.043 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 29 16:25:19.052930 coreos-metadata[1871]: Jan 29 16:25:19.050 INFO Fetch successful Jan 29 16:25:19.052930 coreos-metadata[1871]: Jan 29 16:25:19.050 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 29 16:25:19.052930 coreos-metadata[1871]: Jan 29 16:25:19.052 INFO Fetch successful Jan 29 16:25:19.052930 coreos-metadata[1871]: Jan 29 16:25:19.052 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 29 16:25:19.048060 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 29 16:25:19.063167 extend-filesystems[1874]: Resized partition /dev/nvme0n1p9 Jan 29 16:25:19.046388 dbus-daemon[1872]: [system] SELinux support is enabled Jan 29 16:25:19.072244 coreos-metadata[1871]: Jan 29 16:25:19.053 INFO Fetch successful Jan 29 16:25:19.072244 coreos-metadata[1871]: Jan 29 16:25:19.053 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 29 16:25:19.072244 coreos-metadata[1871]: Jan 29 16:25:19.053 INFO Fetch failed with 404: resource not found Jan 29 16:25:19.072244 coreos-metadata[1871]: Jan 29 16:25:19.053 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 29 16:25:19.072244 coreos-metadata[1871]: Jan 29 16:25:19.056 INFO Fetch successful Jan 29 16:25:19.072244 coreos-metadata[1871]: Jan 29 16:25:19.056 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 29 16:25:19.072244 coreos-metadata[1871]: Jan 29 16:25:19.056 INFO Fetch successful Jan 29 16:25:19.072244 coreos-metadata[1871]: Jan 29 16:25:19.056 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 29 16:25:19.072244 coreos-metadata[1871]: Jan 29 16:25:19.057 INFO Fetch successful Jan 29 16:25:19.072244 coreos-metadata[1871]: Jan 29 16:25:19.057 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 29 16:25:19.072244 coreos-metadata[1871]: Jan 29 16:25:19.058 INFO Fetch successful Jan 29 16:25:19.072244 coreos-metadata[1871]: Jan 29 16:25:19.058 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 29 16:25:19.072244 coreos-metadata[1871]: Jan 29 16:25:19.060 INFO Fetch successful Jan 29 16:25:19.057918 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:25:19.048661 dbus-daemon[1872]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1741 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 29 16:25:19.072927 jq[1889]: true Jan 29 16:25:19.058689 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 16:25:19.063692 dbus-daemon[1872]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 16:25:19.062767 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:25:19.063314 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 16:25:19.070546 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:25:19.070576 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 29 16:25:19.098050 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 14:24:41 UTC 2025 (1): Starting Jan 29 16:25:19.098050 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 16:25:19.098050 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: ---------------------------------------------------- Jan 29 16:25:19.098050 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: ntp-4 is maintained by Network Time Foundation, Jan 29 16:25:19.098050 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 16:25:19.098050 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: corporation. Support and training for ntp-4 are Jan 29 16:25:19.098050 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: available at https://www.nwtime.org/support Jan 29 16:25:19.098050 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: ---------------------------------------------------- Jan 29 16:25:19.097196 ntpd[1877]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 14:24:41 UTC 2025 (1): Starting Jan 29 16:25:19.095880 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 29 16:25:19.102005 extend-filesystems[1904]: resize2fs 1.47.1 (20-May-2024) Jan 29 16:25:19.113480 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 29 16:25:19.097222 ntpd[1877]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 16:25:19.097233 ntpd[1877]: ---------------------------------------------------- Jan 29 16:25:19.097244 ntpd[1877]: ntp-4 is maintained by Network Time Foundation, Jan 29 16:25:19.097253 ntpd[1877]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 16:25:19.097262 ntpd[1877]: corporation. Support and training for ntp-4 are Jan 29 16:25:19.097271 ntpd[1877]: available at https://www.nwtime.org/support Jan 29 16:25:19.097280 ntpd[1877]: ---------------------------------------------------- Jan 29 16:25:19.114461 ntpd[1877]: proto: precision = 0.105 usec (-23) Jan 29 16:25:19.118430 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: proto: precision = 0.105 usec (-23) Jan 29 16:25:19.118430 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: basedate set to 2025-01-17 Jan 29 16:25:19.118430 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: gps base set to 2025-01-19 (week 2350) Jan 29 16:25:19.114818 ntpd[1877]: basedate set to 2025-01-17 Jan 29 16:25:19.114833 ntpd[1877]: gps base set to 2025-01-19 (week 2350) Jan 29 16:25:19.127023 ntpd[1877]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 16:25:19.127790 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 16:25:19.127790 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 16:25:19.127790 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 16:25:19.127790 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: Listen normally on 3 eth0 172.31.23.123:123 Jan 29 16:25:19.127790 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: Listen normally on 4 lo [::1]:123 Jan 29 16:25:19.127790 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: bind(21) AF_INET6 fe80::406:8ff:fe2a:d1b7%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 16:25:19.127790 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: unable to create socket on eth0 (5) for fe80::406:8ff:fe2a:d1b7%2#123 Jan 29 16:25:19.127790 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: failed to init interface for address fe80::406:8ff:fe2a:d1b7%2 Jan 29 16:25:19.127790 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: Listening on routing socket on fd #21 for interface updates Jan 29 16:25:19.127090 ntpd[1877]: Listen and 
drop on 1 v4wildcard 0.0.0.0:123 Jan 29 16:25:19.127280 ntpd[1877]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 16:25:19.127315 ntpd[1877]: Listen normally on 3 eth0 172.31.23.123:123 Jan 29 16:25:19.127368 ntpd[1877]: Listen normally on 4 lo [::1]:123 Jan 29 16:25:19.127415 ntpd[1877]: bind(21) AF_INET6 fe80::406:8ff:fe2a:d1b7%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 16:25:19.127435 ntpd[1877]: unable to create socket on eth0 (5) for fe80::406:8ff:fe2a:d1b7%2#123 Jan 29 16:25:19.127449 ntpd[1877]: failed to init interface for address fe80::406:8ff:fe2a:d1b7%2 Jan 29 16:25:19.127480 ntpd[1877]: Listening on routing socket on fd #21 for interface updates Jan 29 16:25:19.140043 ntpd[1877]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 16:25:19.140086 ntpd[1877]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 16:25:19.140239 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 16:25:19.140239 ntpd[1877]: 29 Jan 16:25:19 ntpd[1877]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 16:25:19.146702 (ntainerd)[1909]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:25:19.183834 update_engine[1886]: I20250129 16:25:19.179575 1886 main.cc:92] Flatcar Update Engine starting Jan 29 16:25:19.187350 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 16:25:19.189328 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:25:19.191854 jq[1901]: true Jan 29 16:25:19.207347 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:25:19.208274 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 16:25:19.213195 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:25:19.219761 update_engine[1886]: I20250129 16:25:19.215888 1886 update_check_scheduler.cc:74] Next update check in 9m50s Jan 29 16:25:19.223087 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 16:25:19.229241 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 29 16:25:19.247690 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 29 16:25:19.265248 extend-filesystems[1904]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 29 16:25:19.265248 extend-filesystems[1904]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 16:25:19.265248 extend-filesystems[1904]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 29 16:25:19.272046 extend-filesystems[1874]: Resized filesystem in /dev/nvme0n1p9 Jan 29 16:25:19.268085 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:25:19.270322 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:25:19.319952 systemd-logind[1885]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 16:25:19.319989 systemd-logind[1885]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 29 16:25:19.320012 systemd-logind[1885]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 16:25:19.321815 systemd-logind[1885]: New seat seat0. Jan 29 16:25:19.326108 systemd[1]: Started systemd-logind.service - User Login Management. 
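[editor's note] The extend-filesystems / resize2fs entries above record an online grow of the mounted root filesystem on /dev/nvme0n1p9 from 553472 to 1489915 4k blocks. The sketch below is not the Flatcar extend-filesystems unit itself, only an illustration of the equivalent steps under the assumption that the underlying partition has already been enlarged.

    # Illustrative online ext4 grow and verification, matching the log above.
    import os
    import subprocess

    DEV = "/dev/nvme0n1p9"
    MOUNTPOINT = "/"

    def online_resize(dev):
        # resize2fs with no size argument grows the filesystem to fill the
        # device; for a mounted ext4 filesystem this happens online, which is
        # the "on-line resizing required" path in the kernel log.
        subprocess.run(["resize2fs", dev], check=True)

    def fs_blocks_4k(path):
        st = os.statvfs(path)
        # Convert to 4k blocks for comparison with the "1489915 (4k) blocks"
        # figure reported by resize2fs.
        return st.f_blocks * st.f_frsize // 4096

    if __name__ == "__main__":
        online_resize(DEV)
        print(f"{MOUNTPOINT} now spans ~{fs_blocks_4k(MOUNTPOINT)} 4k blocks")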
Jan 29 16:25:19.388158 bash[1950]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:25:19.389147 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:25:19.395954 systemd[1]: Starting sshkeys.service... Jan 29 16:25:19.440958 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1754) Jan 29 16:25:19.493284 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 16:25:19.502075 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 16:25:19.541166 dbus-daemon[1872]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 16:25:19.541620 dbus-daemon[1872]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1907 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 16:25:19.548125 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 29 16:25:19.562073 systemd[1]: Starting polkit.service - Authorization Manager... Jan 29 16:25:19.615789 systemd-networkd[1741]: eth0: Gained IPv6LL Jan 29 16:25:19.621555 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:25:19.624421 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:25:19.634990 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 29 16:25:19.645991 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:19.655803 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:25:19.709717 polkitd[1994]: Started polkitd version 121 Jan 29 16:25:19.744456 polkitd[1994]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 16:25:19.744548 polkitd[1994]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 16:25:19.752552 coreos-metadata[1963]: Jan 29 16:25:19.752 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 16:25:19.759734 polkitd[1994]: Finished loading, compiling and executing 2 rules Jan 29 16:25:19.770776 coreos-metadata[1963]: Jan 29 16:25:19.770 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 29 16:25:19.772462 dbus-daemon[1872]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 16:25:19.774163 coreos-metadata[1963]: Jan 29 16:25:19.774 INFO Fetch successful Jan 29 16:25:19.774245 coreos-metadata[1963]: Jan 29 16:25:19.774 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 16:25:19.776095 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 16:25:19.781226 coreos-metadata[1963]: Jan 29 16:25:19.781 INFO Fetch successful Jan 29 16:25:19.783572 polkitd[1994]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 16:25:19.788427 unknown[1963]: wrote ssh authorized keys file for user: core Jan 29 16:25:19.850533 amazon-ssm-agent[2013]: Initializing new seelog logger Jan 29 16:25:19.853175 amazon-ssm-agent[2013]: New Seelog Logger Creation Complete Jan 29 16:25:19.853175 amazon-ssm-agent[2013]: 2025/01/29 16:25:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:25:19.853175 amazon-ssm-agent[2013]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 29 16:25:19.853973 amazon-ssm-agent[2013]: 2025/01/29 16:25:19 processing appconfig overrides Jan 29 16:25:19.855307 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:25:19.856638 amazon-ssm-agent[2013]: 2025/01/29 16:25:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:25:19.856638 amazon-ssm-agent[2013]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:25:19.856638 amazon-ssm-agent[2013]: 2025/01/29 16:25:19 processing appconfig overrides Jan 29 16:25:19.857315 update-ssh-keys[2064]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:25:19.859731 amazon-ssm-agent[2013]: 2025/01/29 16:25:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:25:19.859731 amazon-ssm-agent[2013]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:25:19.859731 amazon-ssm-agent[2013]: 2025/01/29 16:25:19 processing appconfig overrides Jan 29 16:25:19.859731 amazon-ssm-agent[2013]: 2025-01-29 16:25:19 INFO Proxy environment variables: Jan 29 16:25:19.859941 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 16:25:19.864346 amazon-ssm-agent[2013]: 2025/01/29 16:25:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:25:19.864459 amazon-ssm-agent[2013]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:25:19.865657 amazon-ssm-agent[2013]: 2025/01/29 16:25:19 processing appconfig overrides Jan 29 16:25:19.869123 systemd[1]: Finished sshkeys.service. Jan 29 16:25:19.880878 systemd-hostnamed[1907]: Hostname set to (transient) Jan 29 16:25:19.881002 systemd-resolved[1696]: System hostname changed to 'ip-172-31-23-123'. Jan 29 16:25:19.950783 locksmithd[1933]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:25:19.962636 amazon-ssm-agent[2013]: 2025-01-29 16:25:19 INFO https_proxy: Jan 29 16:25:19.995821 containerd[1909]: time="2025-01-29T16:25:19.995722219Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:25:20.062751 amazon-ssm-agent[2013]: 2025-01-29 16:25:19 INFO http_proxy: Jan 29 16:25:20.119290 containerd[1909]: time="2025-01-29T16:25:20.119132900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:25:20.128873 containerd[1909]: time="2025-01-29T16:25:20.128801302Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:25:20.128873 containerd[1909]: time="2025-01-29T16:25:20.128858406Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:25:20.129018 containerd[1909]: time="2025-01-29T16:25:20.128893830Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 16:25:20.129654 containerd[1909]: time="2025-01-29T16:25:20.129077517Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:25:20.129654 containerd[1909]: time="2025-01-29T16:25:20.129103920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 29 16:25:20.129654 containerd[1909]: time="2025-01-29T16:25:20.129188436Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:25:20.129654 containerd[1909]: time="2025-01-29T16:25:20.129205196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:25:20.129654 containerd[1909]: time="2025-01-29T16:25:20.129485104Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:25:20.129654 containerd[1909]: time="2025-01-29T16:25:20.129520048Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:25:20.129654 containerd[1909]: time="2025-01-29T16:25:20.129539945Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:25:20.129654 containerd[1909]: time="2025-01-29T16:25:20.129553413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:25:20.129654 containerd[1909]: time="2025-01-29T16:25:20.129654621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:25:20.129994 containerd[1909]: time="2025-01-29T16:25:20.129910694Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:25:20.130130 containerd[1909]: time="2025-01-29T16:25:20.130102690Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:25:20.130172 containerd[1909]: time="2025-01-29T16:25:20.130133571Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:25:20.130255 containerd[1909]: time="2025-01-29T16:25:20.130236795Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:25:20.130318 containerd[1909]: time="2025-01-29T16:25:20.130300660Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:25:20.141067 sshd_keygen[1920]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:25:20.141425 containerd[1909]: time="2025-01-29T16:25:20.141392886Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:25:20.141514 containerd[1909]: time="2025-01-29T16:25:20.141494774Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:25:20.141588 containerd[1909]: time="2025-01-29T16:25:20.141525219Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:25:20.141654 containerd[1909]: time="2025-01-29T16:25:20.141618687Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 29 16:25:20.141654 containerd[1909]: time="2025-01-29T16:25:20.141642208Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:25:20.141833 containerd[1909]: time="2025-01-29T16:25:20.141812428Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:25:20.144638 containerd[1909]: time="2025-01-29T16:25:20.143818544Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:25:20.144638 containerd[1909]: time="2025-01-29T16:25:20.143975853Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:25:20.144638 containerd[1909]: time="2025-01-29T16:25:20.144000374Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:25:20.144638 containerd[1909]: time="2025-01-29T16:25:20.144023683Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:25:20.144638 containerd[1909]: time="2025-01-29T16:25:20.144046575Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:25:20.144638 containerd[1909]: time="2025-01-29T16:25:20.144071605Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:25:20.144638 containerd[1909]: time="2025-01-29T16:25:20.144093123Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:25:20.144638 containerd[1909]: time="2025-01-29T16:25:20.144116057Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:25:20.144638 containerd[1909]: time="2025-01-29T16:25:20.144143783Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:25:20.144638 containerd[1909]: time="2025-01-29T16:25:20.144166592Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:25:20.144638 containerd[1909]: time="2025-01-29T16:25:20.144184623Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:25:20.144638 containerd[1909]: time="2025-01-29T16:25:20.144201447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:25:20.144638 containerd[1909]: time="2025-01-29T16:25:20.144231784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:25:20.144638 containerd[1909]: time="2025-01-29T16:25:20.144252291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:25:20.145160 containerd[1909]: time="2025-01-29T16:25:20.144273042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:25:20.145160 containerd[1909]: time="2025-01-29T16:25:20.144294565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:25:20.145160 containerd[1909]: time="2025-01-29T16:25:20.144314603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 29 16:25:20.145160 containerd[1909]: time="2025-01-29T16:25:20.144335568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:25:20.145160 containerd[1909]: time="2025-01-29T16:25:20.144364145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:25:20.145160 containerd[1909]: time="2025-01-29T16:25:20.144383500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:25:20.145160 containerd[1909]: time="2025-01-29T16:25:20.144403934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:25:20.145160 containerd[1909]: time="2025-01-29T16:25:20.144428046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:25:20.145160 containerd[1909]: time="2025-01-29T16:25:20.144448340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:25:20.145160 containerd[1909]: time="2025-01-29T16:25:20.144465436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:25:20.145160 containerd[1909]: time="2025-01-29T16:25:20.144482258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:25:20.145160 containerd[1909]: time="2025-01-29T16:25:20.144502239Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:25:20.145160 containerd[1909]: time="2025-01-29T16:25:20.144530715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:25:20.145160 containerd[1909]: time="2025-01-29T16:25:20.144549726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:25:20.145160 containerd[1909]: time="2025-01-29T16:25:20.144567054Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:25:20.145678 containerd[1909]: time="2025-01-29T16:25:20.144676342Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 16:25:20.145678 containerd[1909]: time="2025-01-29T16:25:20.144704274Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:25:20.145678 containerd[1909]: time="2025-01-29T16:25:20.144784205Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:25:20.145678 containerd[1909]: time="2025-01-29T16:25:20.144805047Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:25:20.145678 containerd[1909]: time="2025-01-29T16:25:20.144818547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:25:20.145678 containerd[1909]: time="2025-01-29T16:25:20.144837021Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:25:20.145678 containerd[1909]: time="2025-01-29T16:25:20.144854943Z" level=info msg="NRI interface is disabled by configuration." 
Jan 29 16:25:20.145678 containerd[1909]: time="2025-01-29T16:25:20.144870082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 16:25:20.145953 containerd[1909]: time="2025-01-29T16:25:20.145297783Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:25:20.145953 containerd[1909]: time="2025-01-29T16:25:20.145362987Z" level=info msg="Connect containerd service" Jan 29 16:25:20.145953 containerd[1909]: time="2025-01-29T16:25:20.145400180Z" level=info msg="using legacy CRI server" Jan 29 16:25:20.145953 containerd[1909]: time="2025-01-29T16:25:20.145410459Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:25:20.145953 containerd[1909]: time="2025-01-29T16:25:20.145575643Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:25:20.150637 containerd[1909]: time="2025-01-29T16:25:20.147991158Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:25:20.150637 containerd[1909]: time="2025-01-29T16:25:20.148114215Z" level=info msg="Start subscribing containerd event" Jan 29 16:25:20.150637 containerd[1909]: time="2025-01-29T16:25:20.148161933Z" level=info msg="Start recovering state" Jan 29 16:25:20.150637 containerd[1909]: time="2025-01-29T16:25:20.148240030Z" level=info msg="Start event monitor" Jan 29 16:25:20.150637 containerd[1909]: time="2025-01-29T16:25:20.148259045Z" level=info msg="Start snapshots syncer" Jan 29 16:25:20.150637 containerd[1909]: time="2025-01-29T16:25:20.148272120Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:25:20.150637 containerd[1909]: time="2025-01-29T16:25:20.148284394Z" level=info msg="Start streaming server" Jan 29 16:25:20.150637 containerd[1909]: time="2025-01-29T16:25:20.150068034Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:25:20.150637 containerd[1909]: time="2025-01-29T16:25:20.150134558Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:25:20.150637 containerd[1909]: time="2025-01-29T16:25:20.150212870Z" level=info msg="containerd successfully booted in 0.156061s" Jan 29 16:25:20.151457 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:25:20.162827 amazon-ssm-agent[2013]: 2025-01-29 16:25:19 INFO no_proxy: Jan 29 16:25:20.204940 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:25:20.219935 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:25:20.228152 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:25:20.229142 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:25:20.239564 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:25:20.256710 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:25:20.262635 amazon-ssm-agent[2013]: 2025-01-29 16:25:19 INFO Checking if agent identity type OnPrem can be assumed Jan 29 16:25:20.266965 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 16:25:20.271996 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 16:25:20.275450 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:25:20.359908 amazon-ssm-agent[2013]: 2025-01-29 16:25:19 INFO Checking if agent identity type EC2 can be assumed Jan 29 16:25:20.458487 amazon-ssm-agent[2013]: 2025-01-29 16:25:20 INFO Agent will take identity from EC2 Jan 29 16:25:20.519380 amazon-ssm-agent[2013]: 2025-01-29 16:25:20 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 16:25:20.519380 amazon-ssm-agent[2013]: 2025-01-29 16:25:20 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 16:25:20.519380 amazon-ssm-agent[2013]: 2025-01-29 16:25:20 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 16:25:20.519380 amazon-ssm-agent[2013]: 2025-01-29 16:25:20 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 29 16:25:20.519380 amazon-ssm-agent[2013]: 2025-01-29 16:25:20 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 29 16:25:20.519380 amazon-ssm-agent[2013]: 2025-01-29 16:25:20 INFO [amazon-ssm-agent] Starting Core Agent Jan 29 16:25:20.519380 amazon-ssm-agent[2013]: 2025-01-29 16:25:20 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jan 29 16:25:20.519380 amazon-ssm-agent[2013]: 2025-01-29 16:25:20 INFO [Registrar] Starting registrar module Jan 29 16:25:20.519803 amazon-ssm-agent[2013]: 2025-01-29 16:25:20 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 29 16:25:20.519803 amazon-ssm-agent[2013]: 2025-01-29 16:25:20 INFO [EC2Identity] EC2 registration was successful. Jan 29 16:25:20.519803 amazon-ssm-agent[2013]: 2025-01-29 16:25:20 INFO [CredentialRefresher] credentialRefresher has started Jan 29 16:25:20.519803 amazon-ssm-agent[2013]: 2025-01-29 16:25:20 INFO [CredentialRefresher] Starting credentials refresher loop Jan 29 16:25:20.519803 amazon-ssm-agent[2013]: 2025-01-29 16:25:20 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 29 16:25:20.557276 amazon-ssm-agent[2013]: 2025-01-29 16:25:20 INFO [CredentialRefresher] Next credential rotation will be in 32.24165989355 minutes Jan 29 16:25:21.357787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:21.360360 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:25:21.362312 systemd[1]: Startup finished in 806ms (kernel) + 7.584s (initrd) + 7.111s (userspace) = 15.502s. Jan 29 16:25:21.536970 amazon-ssm-agent[2013]: 2025-01-29 16:25:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 29 16:25:21.571770 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:25:21.638125 amazon-ssm-agent[2013]: 2025-01-29 16:25:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2114) started Jan 29 16:25:21.680428 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:25:21.688383 systemd[1]: Started sshd@0-172.31.23.123:22-139.178.68.195:45260.service - OpenSSH per-connection server daemon (139.178.68.195:45260). Jan 29 16:25:21.748074 amazon-ssm-agent[2013]: 2025-01-29 16:25:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 29 16:25:21.936258 sshd[2125]: Accepted publickey for core from 139.178.68.195 port 45260 ssh2: RSA SHA256:vbUkVCMrs4gTXRs1sqC0OSq8ZkfijT0hqZd53m264iQ Jan 29 16:25:21.937250 sshd-session[2125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:21.956896 systemd-logind[1885]: New session 1 of user core. Jan 29 16:25:21.959845 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 16:25:21.969142 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:25:22.019158 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:25:22.028061 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:25:22.043205 (systemd)[2136]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:25:22.047145 systemd-logind[1885]: New session c1 of user core. Jan 29 16:25:22.097663 ntpd[1877]: Listen normally on 6 eth0 [fe80::406:8ff:fe2a:d1b7%2]:123 Jan 29 16:25:22.098158 ntpd[1877]: 29 Jan 16:25:22 ntpd[1877]: Listen normally on 6 eth0 [fe80::406:8ff:fe2a:d1b7%2]:123 Jan 29 16:25:22.298307 systemd[2136]: Queued start job for default target default.target. 
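[editor's note] The ntpd "Listen normally on 6 eth0 [fe80::...]:123" entry above succeeds only after eth0 "Gained IPv6LL"; the earlier bind attempts failed with "Cannot assign requested address" because the link-local address did not yet exist on the interface. A small sketch of such a bind: the address is the one from the log, the port is an unprivileged stand-in for 123, and the scope id must be supplied explicitly.

    # Why the earlier ntpd bind failed: an IPv6 link-local bind needs the
    # address to be present on the interface and an explicit scope id.
    import socket

    ADDR = "fe80::406:8ff:fe2a:d1b7"   # link-local address from the log
    IFACE = "eth0"
    PORT = 12345                        # unprivileged stand-in for ntp's 123

    def bind_link_local(addr, iface, port):
        scope_id = socket.if_nametoindex(iface)
        s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        try:
            # The 4-tuple sockaddr carries the scope id; before the kernel has
            # assigned the address, bind() fails with "Cannot assign requested
            # address", exactly as in the ntpd log above.
            s.bind((addr, port, 0, scope_id))
            return s
        except OSError as e:
            s.close()
            raise SystemExit(f"bind failed: {e}")

    if __name__ == "__main__":
        sock = bind_link_local(ADDR, IFACE, PORT)
        print("listening on", sock.getsockname())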
Jan 29 16:25:22.304240 systemd[2136]: Created slice app.slice - User Application Slice. Jan 29 16:25:22.304280 systemd[2136]: Reached target paths.target - Paths. Jan 29 16:25:22.304334 systemd[2136]: Reached target timers.target - Timers. Jan 29 16:25:22.306559 systemd[2136]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:25:22.332793 systemd[2136]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:25:22.332944 systemd[2136]: Reached target sockets.target - Sockets. Jan 29 16:25:22.333005 systemd[2136]: Reached target basic.target - Basic System. Jan 29 16:25:22.333168 systemd[2136]: Reached target default.target - Main User Target. Jan 29 16:25:22.333214 systemd[2136]: Startup finished in 273ms. Jan 29 16:25:22.333274 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:25:22.340915 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:25:22.510384 systemd[1]: Started sshd@1-172.31.23.123:22-139.178.68.195:45270.service - OpenSSH per-connection server daemon (139.178.68.195:45270). Jan 29 16:25:22.704843 sshd[2148]: Accepted publickey for core from 139.178.68.195 port 45270 ssh2: RSA SHA256:vbUkVCMrs4gTXRs1sqC0OSq8ZkfijT0hqZd53m264iQ Jan 29 16:25:22.708493 sshd-session[2148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:22.718947 systemd-logind[1885]: New session 2 of user core. Jan 29 16:25:22.722815 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 16:25:22.832749 kubelet[2111]: E0129 16:25:22.832652 2111 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:25:22.835390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:25:22.835643 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:25:22.835972 systemd[1]: kubelet.service: Consumed 1.015s CPU time, 241M memory peak. Jan 29 16:25:22.848595 sshd[2151]: Connection closed by 139.178.68.195 port 45270 Jan 29 16:25:22.849390 sshd-session[2148]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:22.853704 systemd[1]: sshd@1-172.31.23.123:22-139.178.68.195:45270.service: Deactivated successfully. Jan 29 16:25:22.863956 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 16:25:22.865545 systemd-logind[1885]: Session 2 logged out. Waiting for processes to exit. Jan 29 16:25:22.866868 systemd-logind[1885]: Removed session 2. Jan 29 16:25:22.888982 systemd[1]: Started sshd@2-172.31.23.123:22-139.178.68.195:45286.service - OpenSSH per-connection server daemon (139.178.68.195:45286). Jan 29 16:25:23.051290 sshd[2159]: Accepted publickey for core from 139.178.68.195 port 45286 ssh2: RSA SHA256:vbUkVCMrs4gTXRs1sqC0OSq8ZkfijT0hqZd53m264iQ Jan 29 16:25:23.052475 sshd-session[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:23.059738 systemd-logind[1885]: New session 3 of user core. Jan 29 16:25:23.066850 systemd[1]: Started session-3.scope - Session 3 of User core. 
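[editor's note] The kubelet exit logged above ("failed to load kubelet config file ... /var/lib/kubelet/config.yaml: no such file or directory") is the expected failure when the node has not yet been joined to a cluster; a kubeadm join or provisioning step normally writes that file. The sketch below only mirrors that precondition; the two header fields are the standard KubeletConfiguration identifiers, and everything else in a real file is cluster-specific.

    # Preflight check for the failure mode seen in the kubelet log above.
    import os
    import sys

    CONFIG = "/var/lib/kubelet/config.yaml"

    STUB = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # cluster-specific settings (cgroupDriver, clusterDNS, ...) go here
    """

    def preflight(path=CONFIG, write_stub=False):
        if os.path.isfile(path):
            print(f"{path} present, kubelet can load it")
            return 0
        print(f"{path} missing: kubelet will exit as in the log above",
              file=sys.stderr)
        if write_stub:
            os.makedirs(os.path.dirname(path), exist_ok=True)
            with open(path, "w") as f:
                f.write(STUB)
            print(f"wrote placeholder {path}; replace with real cluster config")
        return 1

    if __name__ == "__main__":
        sys.exit(preflight())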
Jan 29 16:25:23.180061 sshd[2161]: Connection closed by 139.178.68.195 port 45286 Jan 29 16:25:23.180828 sshd-session[2159]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:23.185313 systemd[1]: sshd@2-172.31.23.123:22-139.178.68.195:45286.service: Deactivated successfully. Jan 29 16:25:23.187729 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 16:25:23.189130 systemd-logind[1885]: Session 3 logged out. Waiting for processes to exit. Jan 29 16:25:23.190384 systemd-logind[1885]: Removed session 3. Jan 29 16:25:23.216952 systemd[1]: Started sshd@3-172.31.23.123:22-139.178.68.195:45300.service - OpenSSH per-connection server daemon (139.178.68.195:45300). Jan 29 16:25:23.388930 sshd[2167]: Accepted publickey for core from 139.178.68.195 port 45300 ssh2: RSA SHA256:vbUkVCMrs4gTXRs1sqC0OSq8ZkfijT0hqZd53m264iQ Jan 29 16:25:23.392085 sshd-session[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:23.401351 systemd-logind[1885]: New session 4 of user core. Jan 29 16:25:23.411829 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 16:25:23.531544 sshd[2169]: Connection closed by 139.178.68.195 port 45300 Jan 29 16:25:23.532245 sshd-session[2167]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:23.546860 systemd[1]: sshd@3-172.31.23.123:22-139.178.68.195:45300.service: Deactivated successfully. Jan 29 16:25:23.552122 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:25:23.553171 systemd-logind[1885]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:25:23.564392 systemd-logind[1885]: Removed session 4. Jan 29 16:25:23.575639 systemd[1]: Started sshd@4-172.31.23.123:22-139.178.68.195:45304.service - OpenSSH per-connection server daemon (139.178.68.195:45304). Jan 29 16:25:23.744634 sshd[2174]: Accepted publickey for core from 139.178.68.195 port 45304 ssh2: RSA SHA256:vbUkVCMrs4gTXRs1sqC0OSq8ZkfijT0hqZd53m264iQ Jan 29 16:25:23.746236 sshd-session[2174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:23.753840 systemd-logind[1885]: New session 5 of user core. Jan 29 16:25:23.760819 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 16:25:23.872642 sudo[2178]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 16:25:23.873276 sudo[2178]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:25:23.888824 sudo[2178]: pam_unix(sudo:session): session closed for user root Jan 29 16:25:23.911851 sshd[2177]: Connection closed by 139.178.68.195 port 45304 Jan 29 16:25:23.912889 sshd-session[2174]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:23.916453 systemd[1]: sshd@4-172.31.23.123:22-139.178.68.195:45304.service: Deactivated successfully. Jan 29 16:25:23.918449 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:25:23.920023 systemd-logind[1885]: Session 5 logged out. Waiting for processes to exit. Jan 29 16:25:23.921243 systemd-logind[1885]: Removed session 5. Jan 29 16:25:23.952056 systemd[1]: Started sshd@5-172.31.23.123:22-139.178.68.195:45318.service - OpenSSH per-connection server daemon (139.178.68.195:45318). 
Jan 29 16:25:24.119627 sshd[2184]: Accepted publickey for core from 139.178.68.195 port 45318 ssh2: RSA SHA256:vbUkVCMrs4gTXRs1sqC0OSq8ZkfijT0hqZd53m264iQ Jan 29 16:25:24.121106 sshd-session[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:24.126949 systemd-logind[1885]: New session 6 of user core. Jan 29 16:25:24.136287 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 16:25:24.232414 sudo[2188]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 16:25:24.233069 sudo[2188]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:25:24.237517 sudo[2188]: pam_unix(sudo:session): session closed for user root Jan 29 16:25:24.244302 sudo[2187]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 16:25:24.244954 sudo[2187]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:25:24.275324 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:25:24.314288 augenrules[2210]: No rules Jan 29 16:25:24.315917 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:25:24.316185 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:25:24.317386 sudo[2187]: pam_unix(sudo:session): session closed for user root Jan 29 16:25:24.339958 sshd[2186]: Connection closed by 139.178.68.195 port 45318 Jan 29 16:25:24.340766 sshd-session[2184]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:24.346193 systemd[1]: sshd@5-172.31.23.123:22-139.178.68.195:45318.service: Deactivated successfully. Jan 29 16:25:24.348380 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:25:24.349295 systemd-logind[1885]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:25:24.351162 systemd-logind[1885]: Removed session 6. Jan 29 16:25:24.380983 systemd[1]: Started sshd@6-172.31.23.123:22-139.178.68.195:45326.service - OpenSSH per-connection server daemon (139.178.68.195:45326). Jan 29 16:25:24.547652 sshd[2219]: Accepted publickey for core from 139.178.68.195 port 45326 ssh2: RSA SHA256:vbUkVCMrs4gTXRs1sqC0OSq8ZkfijT0hqZd53m264iQ Jan 29 16:25:24.548766 sshd-session[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:25:24.566782 systemd-logind[1885]: New session 7 of user core. Jan 29 16:25:24.571819 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 16:25:24.668180 sudo[2222]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:25:24.668564 sudo[2222]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:25:25.759388 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:25.759973 systemd[1]: kubelet.service: Consumed 1.015s CPU time, 241M memory peak. Jan 29 16:25:25.779050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:25.817280 systemd[1]: Reload requested from client PID 2258 ('systemctl') (unit session-7.scope)... Jan 29 16:25:25.817301 systemd[1]: Reloading... Jan 29 16:25:25.991641 zram_generator::config[2306]: No configuration found. Jan 29 16:25:25.934076 systemd-resolved[1696]: Clock change detected. Flushing caches. Jan 29 16:25:25.955688 systemd-journald[1456]: Time jumped backwards, rotating. 
Jan 29 16:25:26.008346 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:25:26.136423 systemd[1]: Reloading finished in 482 ms. Jan 29 16:25:26.197757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:26.201275 (kubelet)[2355]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:25:26.208256 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:26.209048 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:25:26.209311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:26.209370 systemd[1]: kubelet.service: Consumed 125ms CPU time, 85M memory peak. Jan 29 16:25:26.218032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:25:26.460152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:25:26.476091 (kubelet)[2371]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:25:26.534137 kubelet[2371]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:25:26.534137 kubelet[2371]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:25:26.534137 kubelet[2371]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:25:26.536462 kubelet[2371]: I0129 16:25:26.536411 2371 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:25:26.912912 kubelet[2371]: I0129 16:25:26.912861 2371 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 16:25:26.912912 kubelet[2371]: I0129 16:25:26.912893 2371 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:25:26.913166 kubelet[2371]: I0129 16:25:26.913144 2371 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 16:25:26.962393 kubelet[2371]: I0129 16:25:26.962080 2371 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:25:26.982405 kubelet[2371]: I0129 16:25:26.982380 2371 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:25:26.988632 kubelet[2371]: I0129 16:25:26.988580 2371 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:25:26.989395 kubelet[2371]: I0129 16:25:26.988772 2371 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.23.123","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 16:25:26.994464 kubelet[2371]: I0129 16:25:26.992124 2371 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:25:26.994464 kubelet[2371]: I0129 16:25:26.992169 2371 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 16:25:26.994464 kubelet[2371]: I0129 16:25:26.993538 2371 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:25:26.996600 kubelet[2371]: I0129 16:25:26.995862 2371 kubelet.go:400] "Attempting to sync node with API server" Jan 29 16:25:26.996600 kubelet[2371]: I0129 16:25:26.995892 2371 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:25:26.996600 kubelet[2371]: I0129 16:25:26.995985 2371 kubelet.go:312] "Adding apiserver pod source" Jan 29 16:25:26.996600 kubelet[2371]: I0129 16:25:26.996078 2371 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:25:27.007920 kubelet[2371]: E0129 16:25:27.006944 2371 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:27.008182 kubelet[2371]: E0129 16:25:27.008163 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:27.010159 kubelet[2371]: I0129 16:25:27.009913 2371 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:25:27.011961 kubelet[2371]: I0129 16:25:27.011881 2371 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:25:27.012220 kubelet[2371]: W0129 16:25:27.012074 2371 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 16:25:27.016967 kubelet[2371]: I0129 16:25:27.016724 2371 server.go:1264] "Started kubelet" Jan 29 16:25:27.021110 kubelet[2371]: I0129 16:25:27.020465 2371 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:25:27.033784 kubelet[2371]: I0129 16:25:27.033725 2371 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:25:27.036598 kubelet[2371]: I0129 16:25:27.035633 2371 server.go:455] "Adding debug handlers to kubelet server" Jan 29 16:25:27.037473 kubelet[2371]: I0129 16:25:27.037415 2371 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:25:27.037714 kubelet[2371]: I0129 16:25:27.037696 2371 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:25:27.037902 kubelet[2371]: I0129 16:25:27.037889 2371 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 16:25:27.038686 kubelet[2371]: I0129 16:25:27.038671 2371 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:25:27.038841 kubelet[2371]: I0129 16:25:27.038820 2371 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:25:27.042121 kubelet[2371]: I0129 16:25:27.042103 2371 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:25:27.042777 kubelet[2371]: I0129 16:25:27.042711 2371 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:25:27.049655 kubelet[2371]: I0129 16:25:27.049636 2371 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:25:27.069932 kubelet[2371]: E0129 16:25:27.069902 2371 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:25:27.103614 kubelet[2371]: I0129 16:25:27.103481 2371 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:25:27.103614 kubelet[2371]: I0129 16:25:27.103501 2371 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:25:27.103614 kubelet[2371]: I0129 16:25:27.103521 2371 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:25:27.110787 kubelet[2371]: E0129 16:25:27.110155 2371 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.23.123\" not found" node="172.31.23.123" Jan 29 16:25:27.111658 kubelet[2371]: I0129 16:25:27.111356 2371 policy_none.go:49] "None policy: Start" Jan 29 16:25:27.115543 kubelet[2371]: I0129 16:25:27.114031 2371 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:25:27.115543 kubelet[2371]: I0129 16:25:27.114060 2371 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:25:27.133076 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:25:27.140443 kubelet[2371]: I0129 16:25:27.139515 2371 kubelet_node_status.go:73] "Attempting to register node" node="172.31.23.123" Jan 29 16:25:27.151043 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:25:27.160731 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 16:25:27.166083 kubelet[2371]: W0129 16:25:27.165959 2371 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective: no such device Jan 29 16:25:27.169478 kubelet[2371]: I0129 16:25:27.168535 2371 kubelet_node_status.go:76] "Successfully registered node" node="172.31.23.123" Jan 29 16:25:27.170736 kubelet[2371]: I0129 16:25:27.170684 2371 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:25:27.173051 kubelet[2371]: I0129 16:25:27.170933 2371 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:25:27.173051 kubelet[2371]: I0129 16:25:27.171054 2371 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:25:27.181488 kubelet[2371]: I0129 16:25:27.181460 2371 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:25:27.183526 kubelet[2371]: I0129 16:25:27.183500 2371 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:25:27.183526 kubelet[2371]: I0129 16:25:27.183665 2371 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:25:27.183526 kubelet[2371]: I0129 16:25:27.183687 2371 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 16:25:27.183526 kubelet[2371]: E0129 16:25:27.183786 2371 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 29 16:25:27.205034 kubelet[2371]: I0129 16:25:27.205013 2371 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 16:25:27.205908 containerd[1909]: time="2025-01-29T16:25:27.205850095Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 16:25:27.206750 kubelet[2371]: I0129 16:25:27.206587 2371 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 16:25:27.505932 sudo[2222]: pam_unix(sudo:session): session closed for user root Jan 29 16:25:27.528499 sshd[2221]: Connection closed by 139.178.68.195 port 45326 Jan 29 16:25:27.529149 sshd-session[2219]: pam_unix(sshd:session): session closed for user core Jan 29 16:25:27.534174 systemd[1]: sshd@6-172.31.23.123:22-139.178.68.195:45326.service: Deactivated successfully. Jan 29 16:25:27.537152 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:25:27.537479 systemd[1]: session-7.scope: Consumed 504ms CPU time, 113.6M memory peak. Jan 29 16:25:27.539773 systemd-logind[1885]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:25:27.541500 systemd-logind[1885]: Removed session 7. 
Jan 29 16:25:27.915947 kubelet[2371]: I0129 16:25:27.915898 2371 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 16:25:27.916413 kubelet[2371]: W0129 16:25:27.916095 2371 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:25:27.916413 kubelet[2371]: W0129 16:25:27.916140 2371 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:25:27.916413 kubelet[2371]: W0129 16:25:27.916167 2371 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:25:28.009105 kubelet[2371]: I0129 16:25:28.009005 2371 apiserver.go:52] "Watching apiserver" Jan 29 16:25:28.009105 kubelet[2371]: E0129 16:25:28.009024 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:28.026617 kubelet[2371]: I0129 16:25:28.025188 2371 topology_manager.go:215] "Topology Admit Handler" podUID="3c290ef4-c5de-4aff-9c50-00258865ebc3" podNamespace="kube-system" podName="cilium-chttv" Jan 29 16:25:28.026617 kubelet[2371]: I0129 16:25:28.025396 2371 topology_manager.go:215] "Topology Admit Handler" podUID="5b114bb3-bac0-4db4-b950-aae758fce113" podNamespace="kube-system" podName="kube-proxy-vhqwd" Jan 29 16:25:28.037060 systemd[1]: Created slice kubepods-besteffort-pod5b114bb3_bac0_4db4_b950_aae758fce113.slice - libcontainer container kubepods-besteffort-pod5b114bb3_bac0_4db4_b950_aae758fce113.slice. 
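
The slice names systemd reports here follow the kubelet's systemd cgroup-driver convention: one slice per QoS class under kubepods.slice, with "pod" plus the pod UID appended and the UID's dashes rewritten to underscores (a dash would otherwise be read as a slice-nesting separator). A small sketch that reproduces the names seen in the entries above:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName rebuilds the systemd slice name the kubelet uses for a pod in
    // the burstable or besteffort QoS class, as seen in the journal entries above.
    func sliceName(qosClass, podUID string) string {
        uid := strings.ReplaceAll(podUID, "-", "_")
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
    }

    func main() {
        // kubepods-besteffort-pod5b114bb3_bac0_4db4_b950_aae758fce113.slice
        fmt.Println(sliceName("besteffort", "5b114bb3-bac0-4db4-b950-aae758fce113"))
        // kubepods-burstable-pod3c290ef4_c5de_4aff_9c50_00258865ebc3.slice
        fmt.Println(sliceName("burstable", "3c290ef4-c5de-4aff-9c50-00258865ebc3"))
    }
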
Jan 29 16:25:28.039917 kubelet[2371]: I0129 16:25:28.039886 2371 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 16:25:28.048168 kubelet[2371]: I0129 16:25:28.047348 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-cilium-cgroup\") pod \"cilium-chttv\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " pod="kube-system/cilium-chttv" Jan 29 16:25:28.048168 kubelet[2371]: I0129 16:25:28.047466 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-lib-modules\") pod \"cilium-chttv\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " pod="kube-system/cilium-chttv" Jan 29 16:25:28.048168 kubelet[2371]: I0129 16:25:28.047500 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-xtables-lock\") pod \"cilium-chttv\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " pod="kube-system/cilium-chttv" Jan 29 16:25:28.048168 kubelet[2371]: I0129 16:25:28.047526 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c290ef4-c5de-4aff-9c50-00258865ebc3-cilium-config-path\") pod \"cilium-chttv\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " pod="kube-system/cilium-chttv" Jan 29 16:25:28.048168 kubelet[2371]: I0129 16:25:28.047550 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-host-proc-sys-kernel\") pod \"cilium-chttv\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " pod="kube-system/cilium-chttv" Jan 29 16:25:28.048168 kubelet[2371]: I0129 16:25:28.047594 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-hostproc\") pod \"cilium-chttv\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " pod="kube-system/cilium-chttv" Jan 29 16:25:28.048463 kubelet[2371]: I0129 16:25:28.047617 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c290ef4-c5de-4aff-9c50-00258865ebc3-hubble-tls\") pod \"cilium-chttv\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " pod="kube-system/cilium-chttv" Jan 29 16:25:28.048463 kubelet[2371]: I0129 16:25:28.047639 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjd88\" (UniqueName: \"kubernetes.io/projected/5b114bb3-bac0-4db4-b950-aae758fce113-kube-api-access-kjd88\") pod \"kube-proxy-vhqwd\" (UID: \"5b114bb3-bac0-4db4-b950-aae758fce113\") " pod="kube-system/kube-proxy-vhqwd" Jan 29 16:25:28.048463 kubelet[2371]: I0129 16:25:28.047661 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5b114bb3-bac0-4db4-b950-aae758fce113-kube-proxy\") pod \"kube-proxy-vhqwd\" (UID: \"5b114bb3-bac0-4db4-b950-aae758fce113\") " pod="kube-system/kube-proxy-vhqwd" Jan 29 
16:25:28.048463 kubelet[2371]: I0129 16:25:28.047695 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjtvl\" (UniqueName: \"kubernetes.io/projected/3c290ef4-c5de-4aff-9c50-00258865ebc3-kube-api-access-sjtvl\") pod \"cilium-chttv\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " pod="kube-system/cilium-chttv" Jan 29 16:25:28.048463 kubelet[2371]: I0129 16:25:28.047720 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b114bb3-bac0-4db4-b950-aae758fce113-xtables-lock\") pod \"kube-proxy-vhqwd\" (UID: \"5b114bb3-bac0-4db4-b950-aae758fce113\") " pod="kube-system/kube-proxy-vhqwd" Jan 29 16:25:28.048877 kubelet[2371]: I0129 16:25:28.047747 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-cilium-run\") pod \"cilium-chttv\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " pod="kube-system/cilium-chttv" Jan 29 16:25:28.048877 kubelet[2371]: I0129 16:25:28.047770 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-bpf-maps\") pod \"cilium-chttv\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " pod="kube-system/cilium-chttv" Jan 29 16:25:28.048877 kubelet[2371]: I0129 16:25:28.047794 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-cni-path\") pod \"cilium-chttv\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " pod="kube-system/cilium-chttv" Jan 29 16:25:28.048877 kubelet[2371]: I0129 16:25:28.047830 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-etc-cni-netd\") pod \"cilium-chttv\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " pod="kube-system/cilium-chttv" Jan 29 16:25:28.048877 kubelet[2371]: I0129 16:25:28.047859 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c290ef4-c5de-4aff-9c50-00258865ebc3-clustermesh-secrets\") pod \"cilium-chttv\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " pod="kube-system/cilium-chttv" Jan 29 16:25:28.048877 kubelet[2371]: I0129 16:25:28.047882 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-host-proc-sys-net\") pod \"cilium-chttv\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " pod="kube-system/cilium-chttv" Jan 29 16:25:28.049083 kubelet[2371]: I0129 16:25:28.047913 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b114bb3-bac0-4db4-b950-aae758fce113-lib-modules\") pod \"kube-proxy-vhqwd\" (UID: \"5b114bb3-bac0-4db4-b950-aae758fce113\") " pod="kube-system/kube-proxy-vhqwd" Jan 29 16:25:28.051013 systemd[1]: Created slice kubepods-burstable-pod3c290ef4_c5de_4aff_9c50_00258865ebc3.slice - libcontainer container 
kubepods-burstable-pod3c290ef4_c5de_4aff_9c50_00258865ebc3.slice. Jan 29 16:25:28.347791 containerd[1909]: time="2025-01-29T16:25:28.347745019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vhqwd,Uid:5b114bb3-bac0-4db4-b950-aae758fce113,Namespace:kube-system,Attempt:0,}" Jan 29 16:25:28.365335 containerd[1909]: time="2025-01-29T16:25:28.365216856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-chttv,Uid:3c290ef4-c5de-4aff-9c50-00258865ebc3,Namespace:kube-system,Attempt:0,}" Jan 29 16:25:28.950957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount552108086.mount: Deactivated successfully. Jan 29 16:25:28.971330 containerd[1909]: time="2025-01-29T16:25:28.971274832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:25:28.976595 containerd[1909]: time="2025-01-29T16:25:28.974793029Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 16:25:28.981940 containerd[1909]: time="2025-01-29T16:25:28.981888864Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:25:28.984350 containerd[1909]: time="2025-01-29T16:25:28.984304374Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:25:28.986707 containerd[1909]: time="2025-01-29T16:25:28.986666069Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:25:28.989505 containerd[1909]: time="2025-01-29T16:25:28.989454856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:25:28.991028 containerd[1909]: time="2025-01-29T16:25:28.990405203Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 642.532074ms" Jan 29 16:25:28.994998 containerd[1909]: time="2025-01-29T16:25:28.994964677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 629.583495ms" Jan 29 16:25:29.009299 kubelet[2371]: E0129 16:25:29.009247 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:29.209248 containerd[1909]: time="2025-01-29T16:25:29.208873348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:25:29.210160 containerd[1909]: time="2025-01-29T16:25:29.209011625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:25:29.210160 containerd[1909]: time="2025-01-29T16:25:29.209035523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:29.210160 containerd[1909]: time="2025-01-29T16:25:29.209139646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:29.210830 containerd[1909]: time="2025-01-29T16:25:29.210163223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:25:29.210830 containerd[1909]: time="2025-01-29T16:25:29.210235587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:25:29.210830 containerd[1909]: time="2025-01-29T16:25:29.210271911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:29.210830 containerd[1909]: time="2025-01-29T16:25:29.210358812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:29.346865 systemd[1]: run-containerd-runc-k8s.io-9438c397b0b26b2a692da8340a51b4ded7550e9d5a22a8f7935493d392444f19-runc.uJ9SfI.mount: Deactivated successfully. Jan 29 16:25:29.358029 systemd[1]: Started cri-containerd-9438c397b0b26b2a692da8340a51b4ded7550e9d5a22a8f7935493d392444f19.scope - libcontainer container 9438c397b0b26b2a692da8340a51b4ded7550e9d5a22a8f7935493d392444f19. Jan 29 16:25:29.361583 systemd[1]: Started cri-containerd-da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227.scope - libcontainer container da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227. Jan 29 16:25:29.399727 containerd[1909]: time="2025-01-29T16:25:29.399539825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vhqwd,Uid:5b114bb3-bac0-4db4-b950-aae758fce113,Namespace:kube-system,Attempt:0,} returns sandbox id \"9438c397b0b26b2a692da8340a51b4ded7550e9d5a22a8f7935493d392444f19\"" Jan 29 16:25:29.405293 containerd[1909]: time="2025-01-29T16:25:29.404995077Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 16:25:29.411345 containerd[1909]: time="2025-01-29T16:25:29.411310852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-chttv,Uid:3c290ef4-c5de-4aff-9c50-00258865ebc3,Namespace:kube-system,Attempt:0,} returns sandbox id \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\"" Jan 29 16:25:30.009450 kubelet[2371]: E0129 16:25:30.009408 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:30.580935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427878874.mount: Deactivated successfully. 
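
The long run of VerifyControllerAttachedVolume entries further up corresponds to the volumes declared in the cilium-chttv and kube-proxy-vhqwd pod specs: hostPath mounts (bpf-maps, cni-path, lib-modules, ...), a ConfigMap, a Secret, and projected service-account tokens. The manifests themselves are not part of this log, so the following is only a hypothetical sketch of how two of those volumes would be expressed with the Go API types; the hostPath path and ConfigMap name are assumptions:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        volumes := []corev1.Volume{
            {
                // "bpf-maps" is logged as a kubernetes.io/host-path volume;
                // /sys/fs/bpf is the conventional location, not shown in the log.
                Name: "bpf-maps",
                VolumeSource: corev1.VolumeSource{
                    HostPath: &corev1.HostPathVolumeSource{Path: "/sys/fs/bpf"},
                },
            },
            {
                // "cilium-config-path" is logged as a kubernetes.io/configmap
                // volume; the ConfigMap name here is an assumption.
                Name: "cilium-config-path",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "cilium-config"},
                    },
                },
            },
        }
        for _, v := range volumes {
            fmt.Println(v.Name)
        }
    }
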
Jan 29 16:25:31.010439 kubelet[2371]: E0129 16:25:31.010319 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:31.137404 containerd[1909]: time="2025-01-29T16:25:31.137348508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:31.138534 containerd[1909]: time="2025-01-29T16:25:31.138358067Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 29 16:25:31.140659 containerd[1909]: time="2025-01-29T16:25:31.140004968Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:31.142969 containerd[1909]: time="2025-01-29T16:25:31.142934006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:31.143755 containerd[1909]: time="2025-01-29T16:25:31.143720880Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.738618642s" Jan 29 16:25:31.143876 containerd[1909]: time="2025-01-29T16:25:31.143855670Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 16:25:31.145017 containerd[1909]: time="2025-01-29T16:25:31.144994417Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 16:25:31.146543 containerd[1909]: time="2025-01-29T16:25:31.146456691Z" level=info msg="CreateContainer within sandbox \"9438c397b0b26b2a692da8340a51b4ded7550e9d5a22a8f7935493d392444f19\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:25:31.166261 containerd[1909]: time="2025-01-29T16:25:31.166208455Z" level=info msg="CreateContainer within sandbox \"9438c397b0b26b2a692da8340a51b4ded7550e9d5a22a8f7935493d392444f19\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b4cff85d335e28e44e28b3047d20a6d199802414d7be97e556ae28786492205\"" Jan 29 16:25:31.167097 containerd[1909]: time="2025-01-29T16:25:31.167065121Z" level=info msg="StartContainer for \"0b4cff85d335e28e44e28b3047d20a6d199802414d7be97e556ae28786492205\"" Jan 29 16:25:31.209605 systemd[1]: Started cri-containerd-0b4cff85d335e28e44e28b3047d20a6d199802414d7be97e556ae28786492205.scope - libcontainer container 0b4cff85d335e28e44e28b3047d20a6d199802414d7be97e556ae28786492205. 
Jan 29 16:25:31.245728 containerd[1909]: time="2025-01-29T16:25:31.245661335Z" level=info msg="StartContainer for \"0b4cff85d335e28e44e28b3047d20a6d199802414d7be97e556ae28786492205\" returns successfully" Jan 29 16:25:32.011414 kubelet[2371]: E0129 16:25:32.011353 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:32.255431 kubelet[2371]: I0129 16:25:32.255346 2371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vhqwd" podStartSLOduration=3.51413684 podStartE2EDuration="5.255326673s" podCreationTimestamp="2025-01-29 16:25:27 +0000 UTC" firstStartedPulling="2025-01-29 16:25:29.403635859 +0000 UTC m=+2.923083681" lastFinishedPulling="2025-01-29 16:25:31.144825678 +0000 UTC m=+4.664273514" observedRunningTime="2025-01-29 16:25:32.253538131 +0000 UTC m=+5.772985971" watchObservedRunningTime="2025-01-29 16:25:32.255326673 +0000 UTC m=+5.774774513" Jan 29 16:25:33.011924 kubelet[2371]: E0129 16:25:33.011801 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:34.012597 kubelet[2371]: E0129 16:25:34.012540 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:35.013614 kubelet[2371]: E0129 16:25:35.013474 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:36.014216 kubelet[2371]: E0129 16:25:36.014168 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:37.014312 kubelet[2371]: E0129 16:25:37.014276 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:37.656145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3246067435.mount: Deactivated successfully. 
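
The pod_startup_latency_tracker entry above can be reproduced from the timestamps it prints: podStartE2EDuration runs from the pod's creation to the observed running time, and podStartSLOduration is the same interval with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted, since the startup SLO excludes image pulling. A small sketch using the values from that entry; the pull window is taken from the m=+ monotonic offsets, which is what makes the last digits line up exactly:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // time.Parse accepts the fractional seconds even though the layout
        // does not spell them out.
        const layout = "2006-01-02 15:04:05 -0700 MST"
        created, _ := time.Parse(layout, "2025-01-29 16:25:27 +0000 UTC")
        running, _ := time.Parse(layout, "2025-01-29 16:25:32.255326673 +0000 UTC")

        // Monotonic offsets from the entry: firstStartedPulling m=+2.923083681,
        // lastFinishedPulling m=+4.664273514, expressed here in nanoseconds.
        pull := (4664273514 - 2923083681) * time.Nanosecond

        e2e := running.Sub(created)
        fmt.Println("podStartE2EDuration:", e2e)      // 5.255326673s
        fmt.Println("podStartSLOduration:", e2e-pull) // 3.51413684s
    }
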
Jan 29 16:25:38.014879 kubelet[2371]: E0129 16:25:38.014828 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:39.016068 kubelet[2371]: E0129 16:25:39.015907 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:40.016506 kubelet[2371]: E0129 16:25:40.016470 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:40.361461 containerd[1909]: time="2025-01-29T16:25:40.361335878Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:40.363768 containerd[1909]: time="2025-01-29T16:25:40.363596131Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 16:25:40.366168 containerd[1909]: time="2025-01-29T16:25:40.365803659Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:40.367615 containerd[1909]: time="2025-01-29T16:25:40.367564535Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.222080571s" Jan 29 16:25:40.367687 containerd[1909]: time="2025-01-29T16:25:40.367622346Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 16:25:40.370195 containerd[1909]: time="2025-01-29T16:25:40.370159662Z" level=info msg="CreateContainer within sandbox \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:25:40.394049 containerd[1909]: time="2025-01-29T16:25:40.393991597Z" level=info msg="CreateContainer within sandbox \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25\"" Jan 29 16:25:40.394564 containerd[1909]: time="2025-01-29T16:25:40.394537597Z" level=info msg="StartContainer for \"af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25\"" Jan 29 16:25:40.431780 systemd[1]: Started cri-containerd-af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25.scope - libcontainer container af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25. Jan 29 16:25:40.462119 containerd[1909]: time="2025-01-29T16:25:40.462066070Z" level=info msg="StartContainer for \"af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25\" returns successfully" Jan 29 16:25:40.474110 systemd[1]: cri-containerd-af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25.scope: Deactivated successfully. 
Jan 29 16:25:40.498071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25-rootfs.mount: Deactivated successfully. Jan 29 16:25:40.616097 containerd[1909]: time="2025-01-29T16:25:40.615949337Z" level=info msg="shim disconnected" id=af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25 namespace=k8s.io Jan 29 16:25:40.616097 containerd[1909]: time="2025-01-29T16:25:40.616011333Z" level=warning msg="cleaning up after shim disconnected" id=af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25 namespace=k8s.io Jan 29 16:25:40.616097 containerd[1909]: time="2025-01-29T16:25:40.616024821Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:25:41.017466 kubelet[2371]: E0129 16:25:41.017427 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:41.240444 containerd[1909]: time="2025-01-29T16:25:41.240402586Z" level=info msg="CreateContainer within sandbox \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:25:41.253632 containerd[1909]: time="2025-01-29T16:25:41.253586316Z" level=info msg="CreateContainer within sandbox \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c\"" Jan 29 16:25:41.254626 containerd[1909]: time="2025-01-29T16:25:41.254173472Z" level=info msg="StartContainer for \"a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c\"" Jan 29 16:25:41.285773 systemd[1]: Started cri-containerd-a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c.scope - libcontainer container a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c. Jan 29 16:25:41.315873 containerd[1909]: time="2025-01-29T16:25:41.315826952Z" level=info msg="StartContainer for \"a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c\" returns successfully" Jan 29 16:25:41.326000 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:25:41.326949 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:25:41.327134 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:25:41.335940 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:25:41.336219 systemd[1]: cri-containerd-a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c.scope: Deactivated successfully. Jan 29 16:25:41.360233 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 16:25:41.369891 containerd[1909]: time="2025-01-29T16:25:41.369336164Z" level=info msg="shim disconnected" id=a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c namespace=k8s.io Jan 29 16:25:41.369891 containerd[1909]: time="2025-01-29T16:25:41.369418211Z" level=warning msg="cleaning up after shim disconnected" id=a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c namespace=k8s.io Jan 29 16:25:41.369891 containerd[1909]: time="2025-01-29T16:25:41.369431281Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:25:42.018040 kubelet[2371]: E0129 16:25:42.017986 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:42.243245 containerd[1909]: time="2025-01-29T16:25:42.243202037Z" level=info msg="CreateContainer within sandbox \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:25:42.262374 containerd[1909]: time="2025-01-29T16:25:42.262323952Z" level=info msg="CreateContainer within sandbox \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7\"" Jan 29 16:25:42.262907 containerd[1909]: time="2025-01-29T16:25:42.262862712Z" level=info msg="StartContainer for \"4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7\"" Jan 29 16:25:42.299864 systemd[1]: Started cri-containerd-4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7.scope - libcontainer container 4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7. Jan 29 16:25:42.338831 systemd[1]: cri-containerd-4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7.scope: Deactivated successfully. Jan 29 16:25:42.339448 containerd[1909]: time="2025-01-29T16:25:42.338625014Z" level=info msg="StartContainer for \"4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7\" returns successfully" Jan 29 16:25:42.366731 containerd[1909]: time="2025-01-29T16:25:42.366644178Z" level=info msg="shim disconnected" id=4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7 namespace=k8s.io Jan 29 16:25:42.366731 containerd[1909]: time="2025-01-29T16:25:42.366706463Z" level=warning msg="cleaning up after shim disconnected" id=4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7 namespace=k8s.io Jan 29 16:25:42.366731 containerd[1909]: time="2025-01-29T16:25:42.366723088Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:25:42.380438 containerd[1909]: time="2025-01-29T16:25:42.380377552Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:25:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:25:42.383883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7-rootfs.mount: Deactivated successfully. 
Jan 29 16:25:43.018835 kubelet[2371]: E0129 16:25:43.018760 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:43.247111 containerd[1909]: time="2025-01-29T16:25:43.247070989Z" level=info msg="CreateContainer within sandbox \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:25:43.264525 containerd[1909]: time="2025-01-29T16:25:43.264442855Z" level=info msg="CreateContainer within sandbox \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13\"" Jan 29 16:25:43.265246 containerd[1909]: time="2025-01-29T16:25:43.265211538Z" level=info msg="StartContainer for \"45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13\"" Jan 29 16:25:43.297754 systemd[1]: Started cri-containerd-45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13.scope - libcontainer container 45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13. Jan 29 16:25:43.324286 systemd[1]: cri-containerd-45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13.scope: Deactivated successfully. Jan 29 16:25:43.325642 containerd[1909]: time="2025-01-29T16:25:43.325482236Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c290ef4_c5de_4aff_9c50_00258865ebc3.slice/cri-containerd-45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13.scope/memory.events\": no such file or directory" Jan 29 16:25:43.328207 containerd[1909]: time="2025-01-29T16:25:43.328140140Z" level=info msg="StartContainer for \"45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13\" returns successfully" Jan 29 16:25:43.351440 containerd[1909]: time="2025-01-29T16:25:43.351364477Z" level=info msg="shim disconnected" id=45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13 namespace=k8s.io Jan 29 16:25:43.351440 containerd[1909]: time="2025-01-29T16:25:43.351423184Z" level=warning msg="cleaning up after shim disconnected" id=45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13 namespace=k8s.io Jan 29 16:25:43.351440 containerd[1909]: time="2025-01-29T16:25:43.351435794Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:25:43.384044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13-rootfs.mount: Deactivated successfully. Jan 29 16:25:44.019346 kubelet[2371]: E0129 16:25:44.019283 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:44.251322 containerd[1909]: time="2025-01-29T16:25:44.251284112Z" level=info msg="CreateContainer within sandbox \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:25:44.285552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2151466279.mount: Deactivated successfully. 
Jan 29 16:25:44.290245 containerd[1909]: time="2025-01-29T16:25:44.290196172Z" level=info msg="CreateContainer within sandbox \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75\"" Jan 29 16:25:44.290872 containerd[1909]: time="2025-01-29T16:25:44.290839105Z" level=info msg="StartContainer for \"2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75\"" Jan 29 16:25:44.322783 systemd[1]: Started cri-containerd-2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75.scope - libcontainer container 2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75. Jan 29 16:25:44.354375 containerd[1909]: time="2025-01-29T16:25:44.354327397Z" level=info msg="StartContainer for \"2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75\" returns successfully" Jan 29 16:25:44.537404 kubelet[2371]: I0129 16:25:44.536243 2371 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 16:25:44.806619 kernel: Initializing XFRM netlink socket Jan 29 16:25:45.019979 kubelet[2371]: E0129 16:25:45.019923 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:45.293487 kubelet[2371]: I0129 16:25:45.293334 2371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-chttv" podStartSLOduration=7.33761944 podStartE2EDuration="18.293313393s" podCreationTimestamp="2025-01-29 16:25:27 +0000 UTC" firstStartedPulling="2025-01-29 16:25:29.413029298 +0000 UTC m=+2.932477126" lastFinishedPulling="2025-01-29 16:25:40.368723242 +0000 UTC m=+13.888171079" observedRunningTime="2025-01-29 16:25:45.293271054 +0000 UTC m=+18.812718896" watchObservedRunningTime="2025-01-29 16:25:45.293313393 +0000 UTC m=+18.812761231" Jan 29 16:25:46.020592 kubelet[2371]: E0129 16:25:46.020526 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:46.495293 systemd-networkd[1741]: cilium_host: Link UP Jan 29 16:25:46.496227 systemd-networkd[1741]: cilium_net: Link UP Jan 29 16:25:46.496659 systemd-networkd[1741]: cilium_net: Gained carrier Jan 29 16:25:46.496892 systemd-networkd[1741]: cilium_host: Gained carrier Jan 29 16:25:46.498454 (udev-worker)[3032]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:25:46.499768 (udev-worker)[3033]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:25:46.640791 systemd-networkd[1741]: cilium_vxlan: Link UP Jan 29 16:25:46.640802 systemd-networkd[1741]: cilium_vxlan: Gained carrier Jan 29 16:25:46.651696 systemd-networkd[1741]: cilium_host: Gained IPv6LL Jan 29 16:25:46.700854 kubelet[2371]: I0129 16:25:46.699952 2371 topology_manager.go:215] "Topology Admit Handler" podUID="04090e33-037c-46fe-bebf-55c5d4d1689c" podNamespace="default" podName="nginx-deployment-85f456d6dd-qkbbg" Jan 29 16:25:46.710376 systemd[1]: Created slice kubepods-besteffort-pod04090e33_037c_46fe_bebf_55c5d4d1689c.slice - libcontainer container kubepods-besteffort-pod04090e33_037c_46fe_bebf_55c5d4d1689c.slice. 
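
The five cilium containers created one after another over the preceding entries (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, then cilium-agent) behave like an ordinary init-container chain: each short-lived step runs to completion and its scope is torn down before the next is created, and only cilium-agent keeps running. The DaemonSet manifest is not in this log, so the following is only a rough sketch of the container ordering it implies, reusing the image reference pulled earlier:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Image reference taken from the PullImage entries earlier in the log.
        image := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"

        // Init containers run to completion in this order before the agent starts,
        // matching the one-at-a-time create/start/teardown pattern in the log.
        spec := corev1.PodSpec{
            InitContainers: []corev1.Container{
                {Name: "mount-cgroup", Image: image},
                {Name: "apply-sysctl-overwrites", Image: image},
                {Name: "mount-bpf-fs", Image: image},
                {Name: "clean-cilium-state", Image: image},
            },
            Containers: []corev1.Container{
                {Name: "cilium-agent", Image: image},
            },
        }

        for _, c := range spec.InitContainers {
            fmt.Println("init:", c.Name)
        }
        fmt.Println("main:", spec.Containers[0].Name)
    }
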
Jan 29 16:25:46.780635 kubelet[2371]: I0129 16:25:46.780527 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b9lj\" (UniqueName: \"kubernetes.io/projected/04090e33-037c-46fe-bebf-55c5d4d1689c-kube-api-access-4b9lj\") pod \"nginx-deployment-85f456d6dd-qkbbg\" (UID: \"04090e33-037c-46fe-bebf-55c5d4d1689c\") " pod="default/nginx-deployment-85f456d6dd-qkbbg" Jan 29 16:25:46.964608 kernel: NET: Registered PF_ALG protocol family Jan 29 16:25:46.999885 kubelet[2371]: E0129 16:25:46.999841 2371 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:47.014212 containerd[1909]: time="2025-01-29T16:25:47.014161858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-qkbbg,Uid:04090e33-037c-46fe-bebf-55c5d4d1689c,Namespace:default,Attempt:0,}" Jan 29 16:25:47.021088 kubelet[2371]: E0129 16:25:47.021007 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:47.354800 systemd-networkd[1741]: cilium_net: Gained IPv6LL Jan 29 16:25:47.761692 systemd-networkd[1741]: lxc_health: Link UP Jan 29 16:25:47.770835 systemd-networkd[1741]: lxc_health: Gained carrier Jan 29 16:25:47.772009 (udev-worker)[3081]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:25:47.930797 systemd-networkd[1741]: cilium_vxlan: Gained IPv6LL Jan 29 16:25:48.021214 kubelet[2371]: E0129 16:25:48.021168 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:48.094870 systemd-networkd[1741]: lxc1d88c463d2db: Link UP Jan 29 16:25:48.103598 kernel: eth0: renamed from tmp944d7 Jan 29 16:25:48.107154 systemd-networkd[1741]: lxc1d88c463d2db: Gained carrier Jan 29 16:25:49.022186 kubelet[2371]: E0129 16:25:49.022127 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:49.467360 systemd-networkd[1741]: lxc1d88c463d2db: Gained IPv6LL Jan 29 16:25:49.659222 systemd-networkd[1741]: lxc_health: Gained IPv6LL Jan 29 16:25:49.755617 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
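
Alongside the cilium_host/cilium_net/cilium_vxlan devices and lxc_health, each pod that gets a veth ends up with an lxc* device on the host side, like lxc1d88c463d2db here (its peer is the interface renamed to eth0 inside the pod's namespace). A trivial standard-library sketch that lists those devices when run on the node itself:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            // cilium_host, cilium_net, cilium_vxlan, lxc_health, lxc<hash>, ...
            if strings.HasPrefix(ifc.Name, "cilium") || strings.HasPrefix(ifc.Name, "lxc") {
                fmt.Println(ifc.Index, ifc.Name, ifc.Flags)
            }
        }
    }
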
Jan 29 16:25:50.022659 kubelet[2371]: E0129 16:25:50.022544 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:51.023203 kubelet[2371]: E0129 16:25:51.023144 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:51.933808 ntpd[1877]: Listen normally on 7 cilium_host 192.168.1.243:123 Jan 29 16:25:51.935253 ntpd[1877]: 29 Jan 16:25:51 ntpd[1877]: Listen normally on 7 cilium_host 192.168.1.243:123 Jan 29 16:25:51.935253 ntpd[1877]: 29 Jan 16:25:51 ntpd[1877]: Listen normally on 8 cilium_net [fe80::501f:caff:fecb:94f0%3]:123 Jan 29 16:25:51.935253 ntpd[1877]: 29 Jan 16:25:51 ntpd[1877]: Listen normally on 9 cilium_host [fe80::34b5:c7ff:fee6:829f%4]:123 Jan 29 16:25:51.935253 ntpd[1877]: 29 Jan 16:25:51 ntpd[1877]: Listen normally on 10 cilium_vxlan [fe80::d8ec:42ff:fe92:48e1%5]:123 Jan 29 16:25:51.935253 ntpd[1877]: 29 Jan 16:25:51 ntpd[1877]: Listen normally on 11 lxc_health [fe80::d87b:26ff:fee2:d4ee%7]:123 Jan 29 16:25:51.935253 ntpd[1877]: 29 Jan 16:25:51 ntpd[1877]: Listen normally on 12 lxc1d88c463d2db [fe80::c07a:adff:fe0e:182c%9]:123 Jan 29 16:25:51.933953 ntpd[1877]: Listen normally on 8 cilium_net [fe80::501f:caff:fecb:94f0%3]:123 Jan 29 16:25:51.934012 ntpd[1877]: Listen normally on 9 cilium_host [fe80::34b5:c7ff:fee6:829f%4]:123 Jan 29 16:25:51.934054 ntpd[1877]: Listen normally on 10 cilium_vxlan [fe80::d8ec:42ff:fe92:48e1%5]:123 Jan 29 16:25:51.934096 ntpd[1877]: Listen normally on 11 lxc_health [fe80::d87b:26ff:fee2:d4ee%7]:123 Jan 29 16:25:51.934137 ntpd[1877]: Listen normally on 12 lxc1d88c463d2db [fe80::c07a:adff:fe0e:182c%9]:123 Jan 29 16:25:52.024341 kubelet[2371]: E0129 16:25:52.024287 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:53.025363 kubelet[2371]: E0129 16:25:53.025298 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:53.226612 containerd[1909]: time="2025-01-29T16:25:53.226485130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:25:53.226612 containerd[1909]: time="2025-01-29T16:25:53.226547441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:25:53.227222 containerd[1909]: time="2025-01-29T16:25:53.226586913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:53.227222 containerd[1909]: time="2025-01-29T16:25:53.226692280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:25:53.281136 systemd[1]: run-containerd-runc-k8s.io-944d751544fa9724b4661731b9a0cc72db18ed556846d9c8f1c3875d94595ace-runc.1IQZq5.mount: Deactivated successfully. Jan 29 16:25:53.289765 systemd[1]: Started cri-containerd-944d751544fa9724b4661731b9a0cc72db18ed556846d9c8f1c3875d94595ace.scope - libcontainer container 944d751544fa9724b4661731b9a0cc72db18ed556846d9c8f1c3875d94595ace. 
Jan 29 16:25:53.337319 containerd[1909]: time="2025-01-29T16:25:53.337277530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-qkbbg,Uid:04090e33-037c-46fe-bebf-55c5d4d1689c,Namespace:default,Attempt:0,} returns sandbox id \"944d751544fa9724b4661731b9a0cc72db18ed556846d9c8f1c3875d94595ace\"" Jan 29 16:25:53.339401 containerd[1909]: time="2025-01-29T16:25:53.339326254Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 16:25:54.026358 kubelet[2371]: E0129 16:25:54.026300 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:55.029471 kubelet[2371]: E0129 16:25:55.026593 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:56.027471 kubelet[2371]: E0129 16:25:56.027418 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:56.175309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2093762427.mount: Deactivated successfully. Jan 29 16:25:57.028067 kubelet[2371]: E0129 16:25:57.028028 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:57.659419 containerd[1909]: time="2025-01-29T16:25:57.659348815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:57.661428 containerd[1909]: time="2025-01-29T16:25:57.661212299Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 29 16:25:57.665409 containerd[1909]: time="2025-01-29T16:25:57.663426387Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:57.667624 containerd[1909]: time="2025-01-29T16:25:57.667586373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:25:57.668779 containerd[1909]: time="2025-01-29T16:25:57.668631182Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 4.329246616s" Jan 29 16:25:57.668779 containerd[1909]: time="2025-01-29T16:25:57.668670829Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 16:25:57.671389 containerd[1909]: time="2025-01-29T16:25:57.671358294Z" level=info msg="CreateContainer within sandbox \"944d751544fa9724b4661731b9a0cc72db18ed556846d9c8f1c3875d94595ace\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 16:25:57.708251 containerd[1909]: time="2025-01-29T16:25:57.708202462Z" level=info msg="CreateContainer within sandbox \"944d751544fa9724b4661731b9a0cc72db18ed556846d9c8f1c3875d94595ace\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"a838d0c6b0fae10cd6b8d9feb26d7ad96aa465b0c30e82005ac8c769d3da4147\"" Jan 29 16:25:57.708906 containerd[1909]: 
time="2025-01-29T16:25:57.708874539Z" level=info msg="StartContainer for \"a838d0c6b0fae10cd6b8d9feb26d7ad96aa465b0c30e82005ac8c769d3da4147\"" Jan 29 16:25:57.752824 systemd[1]: Started cri-containerd-a838d0c6b0fae10cd6b8d9feb26d7ad96aa465b0c30e82005ac8c769d3da4147.scope - libcontainer container a838d0c6b0fae10cd6b8d9feb26d7ad96aa465b0c30e82005ac8c769d3da4147. Jan 29 16:25:57.785628 containerd[1909]: time="2025-01-29T16:25:57.785555528Z" level=info msg="StartContainer for \"a838d0c6b0fae10cd6b8d9feb26d7ad96aa465b0c30e82005ac8c769d3da4147\" returns successfully" Jan 29 16:25:58.028850 kubelet[2371]: E0129 16:25:58.028799 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:25:59.029717 kubelet[2371]: E0129 16:25:59.029659 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:00.030730 kubelet[2371]: E0129 16:26:00.030638 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:01.031851 kubelet[2371]: E0129 16:26:01.031791 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:02.032833 kubelet[2371]: E0129 16:26:02.032772 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:03.033056 kubelet[2371]: E0129 16:26:03.032999 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:04.033923 kubelet[2371]: E0129 16:26:04.033860 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:04.699770 update_engine[1886]: I20250129 16:26:04.699687 1886 update_attempter.cc:509] Updating boot flags... 
Jan 29 16:26:04.781635 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3578) Jan 29 16:26:05.034722 kubelet[2371]: E0129 16:26:05.034680 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:05.076054 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3579) Jan 29 16:26:05.305837 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3579) Jan 29 16:26:05.800338 kubelet[2371]: I0129 16:26:05.799786 2371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-qkbbg" podStartSLOduration=15.468852153 podStartE2EDuration="19.799770617s" podCreationTimestamp="2025-01-29 16:25:46 +0000 UTC" firstStartedPulling="2025-01-29 16:25:53.338880425 +0000 UTC m=+26.858328257" lastFinishedPulling="2025-01-29 16:25:57.669798885 +0000 UTC m=+31.189246721" observedRunningTime="2025-01-29 16:25:58.324473706 +0000 UTC m=+31.843921543" watchObservedRunningTime="2025-01-29 16:26:05.799770617 +0000 UTC m=+39.319218457" Jan 29 16:26:05.800338 kubelet[2371]: I0129 16:26:05.799917 2371 topology_manager.go:215] "Topology Admit Handler" podUID="d4a93473-cf3d-48f7-9dd8-e348c61956dd" podNamespace="default" podName="nfs-server-provisioner-0" Jan 29 16:26:05.806764 systemd[1]: Created slice kubepods-besteffort-podd4a93473_cf3d_48f7_9dd8_e348c61956dd.slice - libcontainer container kubepods-besteffort-podd4a93473_cf3d_48f7_9dd8_e348c61956dd.slice. Jan 29 16:26:05.918875 kubelet[2371]: I0129 16:26:05.918821 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d4a93473-cf3d-48f7-9dd8-e348c61956dd-data\") pod \"nfs-server-provisioner-0\" (UID: \"d4a93473-cf3d-48f7-9dd8-e348c61956dd\") " pod="default/nfs-server-provisioner-0" Jan 29 16:26:05.918875 kubelet[2371]: I0129 16:26:05.918880 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zds7l\" (UniqueName: \"kubernetes.io/projected/d4a93473-cf3d-48f7-9dd8-e348c61956dd-kube-api-access-zds7l\") pod \"nfs-server-provisioner-0\" (UID: \"d4a93473-cf3d-48f7-9dd8-e348c61956dd\") " pod="default/nfs-server-provisioner-0" Jan 29 16:26:06.036315 kubelet[2371]: E0129 16:26:06.036269 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:06.111142 containerd[1909]: time="2025-01-29T16:26:06.111009043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d4a93473-cf3d-48f7-9dd8-e348c61956dd,Namespace:default,Attempt:0,}" Jan 29 16:26:06.193674 (udev-worker)[3577]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:26:06.194719 (udev-worker)[3580]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:26:06.196457 systemd-networkd[1741]: lxc40005e4b42ce: Link UP Jan 29 16:26:06.201636 kernel: eth0: renamed from tmpcddbf Jan 29 16:26:06.204474 systemd-networkd[1741]: lxc40005e4b42ce: Gained carrier Jan 29 16:26:06.443794 containerd[1909]: time="2025-01-29T16:26:06.443602774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:06.443973 containerd[1909]: time="2025-01-29T16:26:06.443680822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:06.443973 containerd[1909]: time="2025-01-29T16:26:06.443699457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:06.444996 containerd[1909]: time="2025-01-29T16:26:06.444949400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:06.469236 systemd[1]: run-containerd-runc-k8s.io-cddbf0ed3e3864ed4f5207fbe8afe2d8ef7322b593e65a30d12c91569fea725a-runc.U9d98Y.mount: Deactivated successfully. Jan 29 16:26:06.481804 systemd[1]: Started cri-containerd-cddbf0ed3e3864ed4f5207fbe8afe2d8ef7322b593e65a30d12c91569fea725a.scope - libcontainer container cddbf0ed3e3864ed4f5207fbe8afe2d8ef7322b593e65a30d12c91569fea725a. Jan 29 16:26:06.541182 containerd[1909]: time="2025-01-29T16:26:06.541134188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d4a93473-cf3d-48f7-9dd8-e348c61956dd,Namespace:default,Attempt:0,} returns sandbox id \"cddbf0ed3e3864ed4f5207fbe8afe2d8ef7322b593e65a30d12c91569fea725a\"" Jan 29 16:26:06.543260 containerd[1909]: time="2025-01-29T16:26:06.543216476Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 16:26:06.996821 kubelet[2371]: E0129 16:26:06.996785 2371 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:07.039392 kubelet[2371]: E0129 16:26:07.039310 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:07.386819 systemd-networkd[1741]: lxc40005e4b42ce: Gained IPv6LL Jan 29 16:26:08.039867 kubelet[2371]: E0129 16:26:08.039806 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:08.939322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount356210649.mount: Deactivated successfully. 
Jan 29 16:26:09.040513 kubelet[2371]: E0129 16:26:09.040464 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:09.937044 ntpd[1877]: Listen normally on 13 lxc40005e4b42ce [fe80::e0b9:16ff:fe6c:ceed%11]:123 Jan 29 16:26:09.937413 ntpd[1877]: 29 Jan 16:26:09 ntpd[1877]: Listen normally on 13 lxc40005e4b42ce [fe80::e0b9:16ff:fe6c:ceed%11]:123 Jan 29 16:26:10.041514 kubelet[2371]: E0129 16:26:10.041474 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:11.042156 kubelet[2371]: E0129 16:26:11.042087 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:11.085246 containerd[1909]: time="2025-01-29T16:26:11.085185119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:11.087255 containerd[1909]: time="2025-01-29T16:26:11.087184390Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 29 16:26:11.093958 containerd[1909]: time="2025-01-29T16:26:11.093868499Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:11.100178 containerd[1909]: time="2025-01-29T16:26:11.100101041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:11.101382 containerd[1909]: time="2025-01-29T16:26:11.101338001Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.558078626s" Jan 29 16:26:11.101477 containerd[1909]: time="2025-01-29T16:26:11.101388607Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 29 16:26:11.106588 containerd[1909]: time="2025-01-29T16:26:11.106207616Z" level=info msg="CreateContainer within sandbox \"cddbf0ed3e3864ed4f5207fbe8afe2d8ef7322b593e65a30d12c91569fea725a\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 16:26:11.136531 containerd[1909]: time="2025-01-29T16:26:11.136475451Z" level=info msg="CreateContainer within sandbox \"cddbf0ed3e3864ed4f5207fbe8afe2d8ef7322b593e65a30d12c91569fea725a\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"fc792b4dfb9807cae8b4113f5db518bee09a953161655dad64241a5a8409f633\"" Jan 29 16:26:11.137339 containerd[1909]: time="2025-01-29T16:26:11.137306485Z" level=info msg="StartContainer for \"fc792b4dfb9807cae8b4113f5db518bee09a953161655dad64241a5a8409f633\"" Jan 29 16:26:11.170044 systemd[1]: run-containerd-runc-k8s.io-fc792b4dfb9807cae8b4113f5db518bee09a953161655dad64241a5a8409f633-runc.OAxteD.mount: Deactivated successfully. 
Jan 29 16:26:11.175745 systemd[1]: Started cri-containerd-fc792b4dfb9807cae8b4113f5db518bee09a953161655dad64241a5a8409f633.scope - libcontainer container fc792b4dfb9807cae8b4113f5db518bee09a953161655dad64241a5a8409f633. Jan 29 16:26:11.215633 containerd[1909]: time="2025-01-29T16:26:11.215560774Z" level=info msg="StartContainer for \"fc792b4dfb9807cae8b4113f5db518bee09a953161655dad64241a5a8409f633\" returns successfully" Jan 29 16:26:11.382486 kubelet[2371]: I0129 16:26:11.381546 2371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.8216172510000002 podStartE2EDuration="6.381527535s" podCreationTimestamp="2025-01-29 16:26:05 +0000 UTC" firstStartedPulling="2025-01-29 16:26:06.54290805 +0000 UTC m=+40.062355876" lastFinishedPulling="2025-01-29 16:26:11.102818338 +0000 UTC m=+44.622266160" observedRunningTime="2025-01-29 16:26:11.381173496 +0000 UTC m=+44.900621370" watchObservedRunningTime="2025-01-29 16:26:11.381527535 +0000 UTC m=+44.900975397" Jan 29 16:26:12.043031 kubelet[2371]: E0129 16:26:12.042977 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:13.043349 kubelet[2371]: E0129 16:26:13.043286 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:14.044083 kubelet[2371]: E0129 16:26:14.044022 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:15.044556 kubelet[2371]: E0129 16:26:15.044494 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:16.044936 kubelet[2371]: E0129 16:26:16.044875 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:17.045704 kubelet[2371]: E0129 16:26:17.045645 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:18.045857 kubelet[2371]: E0129 16:26:18.045808 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:19.047022 kubelet[2371]: E0129 16:26:19.046975 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:20.047958 kubelet[2371]: E0129 16:26:20.047901 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:20.954563 kubelet[2371]: I0129 16:26:20.954512 2371 topology_manager.go:215] "Topology Admit Handler" podUID="4cc3181a-cb9e-46ec-bfc5-365255e4c7b9" podNamespace="default" podName="test-pod-1" Jan 29 16:26:20.970154 systemd[1]: Created slice kubepods-besteffort-pod4cc3181a_cb9e_46ec_bfc5_365255e4c7b9.slice - libcontainer container kubepods-besteffort-pod4cc3181a_cb9e_46ec_bfc5_365255e4c7b9.slice. 
Jan 29 16:26:21.048550 kubelet[2371]: E0129 16:26:21.048476 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:21.112108 kubelet[2371]: I0129 16:26:21.111832 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmdwz\" (UniqueName: \"kubernetes.io/projected/4cc3181a-cb9e-46ec-bfc5-365255e4c7b9-kube-api-access-tmdwz\") pod \"test-pod-1\" (UID: \"4cc3181a-cb9e-46ec-bfc5-365255e4c7b9\") " pod="default/test-pod-1" Jan 29 16:26:21.112108 kubelet[2371]: I0129 16:26:21.111907 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-031a77bb-42c7-45fe-bc10-7ef7f55c9066\" (UniqueName: \"kubernetes.io/nfs/4cc3181a-cb9e-46ec-bfc5-365255e4c7b9-pvc-031a77bb-42c7-45fe-bc10-7ef7f55c9066\") pod \"test-pod-1\" (UID: \"4cc3181a-cb9e-46ec-bfc5-365255e4c7b9\") " pod="default/test-pod-1" Jan 29 16:26:21.302597 kernel: FS-Cache: Loaded Jan 29 16:26:21.455317 kernel: RPC: Registered named UNIX socket transport module. Jan 29 16:26:21.455590 kernel: RPC: Registered udp transport module. Jan 29 16:26:21.455627 kernel: RPC: Registered tcp transport module. Jan 29 16:26:21.456437 kernel: RPC: Registered tcp-with-tls transport module. Jan 29 16:26:21.458427 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 29 16:26:21.749704 kernel: NFS: Registering the id_resolver key type Jan 29 16:26:21.749839 kernel: Key type id_resolver registered Jan 29 16:26:21.749880 kernel: Key type id_legacy registered Jan 29 16:26:21.790153 nfsidmap[4011]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 29 16:26:21.794302 nfsidmap[4012]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 29 16:26:21.886481 containerd[1909]: time="2025-01-29T16:26:21.886431151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4cc3181a-cb9e-46ec-bfc5-365255e4c7b9,Namespace:default,Attempt:0,}" Jan 29 16:26:21.941605 kernel: eth0: renamed from tmp87c8f Jan 29 16:26:21.949076 systemd-networkd[1741]: lxc16f6e05b0f20: Link UP Jan 29 16:26:21.949427 systemd-networkd[1741]: lxc16f6e05b0f20: Gained carrier Jan 29 16:26:21.950056 (udev-worker)[4004]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:26:22.048793 kubelet[2371]: E0129 16:26:22.048678 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:22.280731 containerd[1909]: time="2025-01-29T16:26:22.280431643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:22.280731 containerd[1909]: time="2025-01-29T16:26:22.280512880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:22.280731 containerd[1909]: time="2025-01-29T16:26:22.280529468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:22.281342 containerd[1909]: time="2025-01-29T16:26:22.280670868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:22.312803 systemd[1]: Started cri-containerd-87c8ff2a991836dea166b3885642f765f0d351efb4a4af31e0b67304d4916604.scope - libcontainer container 87c8ff2a991836dea166b3885642f765f0d351efb4a4af31e0b67304d4916604. Jan 29 16:26:22.377550 containerd[1909]: time="2025-01-29T16:26:22.377486529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4cc3181a-cb9e-46ec-bfc5-365255e4c7b9,Namespace:default,Attempt:0,} returns sandbox id \"87c8ff2a991836dea166b3885642f765f0d351efb4a4af31e0b67304d4916604\"" Jan 29 16:26:22.382671 containerd[1909]: time="2025-01-29T16:26:22.382274912Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 16:26:22.741433 containerd[1909]: time="2025-01-29T16:26:22.741302801Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:22.743136 containerd[1909]: time="2025-01-29T16:26:22.743069495Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 29 16:26:22.748171 containerd[1909]: time="2025-01-29T16:26:22.748062691Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 365.750129ms" Jan 29 16:26:22.748171 containerd[1909]: time="2025-01-29T16:26:22.748165408Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 16:26:22.752687 containerd[1909]: time="2025-01-29T16:26:22.752536291Z" level=info msg="CreateContainer within sandbox \"87c8ff2a991836dea166b3885642f765f0d351efb4a4af31e0b67304d4916604\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 29 16:26:22.808548 containerd[1909]: time="2025-01-29T16:26:22.808500849Z" level=info msg="CreateContainer within sandbox \"87c8ff2a991836dea166b3885642f765f0d351efb4a4af31e0b67304d4916604\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"192cf2f5715968da5a974b222aaab414fdfd1a6d092b597593ff2a82a409224a\"" Jan 29 16:26:22.809336 containerd[1909]: time="2025-01-29T16:26:22.809303875Z" level=info msg="StartContainer for \"192cf2f5715968da5a974b222aaab414fdfd1a6d092b597593ff2a82a409224a\"" Jan 29 16:26:22.857908 systemd[1]: Started cri-containerd-192cf2f5715968da5a974b222aaab414fdfd1a6d092b597593ff2a82a409224a.scope - libcontainer container 192cf2f5715968da5a974b222aaab414fdfd1a6d092b597593ff2a82a409224a. 
Jan 29 16:26:22.906193 containerd[1909]: time="2025-01-29T16:26:22.904009333Z" level=info msg="StartContainer for \"192cf2f5715968da5a974b222aaab414fdfd1a6d092b597593ff2a82a409224a\" returns successfully" Jan 29 16:26:23.048958 kubelet[2371]: E0129 16:26:23.048853 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:23.131152 systemd-networkd[1741]: lxc16f6e05b0f20: Gained IPv6LL Jan 29 16:26:23.409404 kubelet[2371]: I0129 16:26:23.409252 2371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.041241024 podStartE2EDuration="16.409237499s" podCreationTimestamp="2025-01-29 16:26:07 +0000 UTC" firstStartedPulling="2025-01-29 16:26:22.381791705 +0000 UTC m=+55.901239540" lastFinishedPulling="2025-01-29 16:26:22.749788178 +0000 UTC m=+56.269236015" observedRunningTime="2025-01-29 16:26:23.408745558 +0000 UTC m=+56.928193399" watchObservedRunningTime="2025-01-29 16:26:23.409237499 +0000 UTC m=+56.928685351" Jan 29 16:26:24.049468 kubelet[2371]: E0129 16:26:24.049422 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:25.050591 kubelet[2371]: E0129 16:26:25.050522 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:25.933838 ntpd[1877]: Listen normally on 14 lxc16f6e05b0f20 [fe80::9c0d:9fff:fe2e:e1b5%13]:123 Jan 29 16:26:25.934261 ntpd[1877]: 29 Jan 16:26:25 ntpd[1877]: Listen normally on 14 lxc16f6e05b0f20 [fe80::9c0d:9fff:fe2e:e1b5%13]:123 Jan 29 16:26:26.051781 kubelet[2371]: E0129 16:26:26.051715 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:26.997230 kubelet[2371]: E0129 16:26:26.997193 2371 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:27.052220 kubelet[2371]: E0129 16:26:27.052121 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:27.058729 containerd[1909]: time="2025-01-29T16:26:27.058619310Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:26:27.066671 containerd[1909]: time="2025-01-29T16:26:27.066202017Z" level=info msg="StopContainer for \"2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75\" with timeout 2 (s)" Jan 29 16:26:27.067150 containerd[1909]: time="2025-01-29T16:26:27.067121619Z" level=info msg="Stop container \"2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75\" with signal terminated" Jan 29 16:26:27.081101 systemd-networkd[1741]: lxc_health: Link DOWN Jan 29 16:26:27.081109 systemd-networkd[1741]: lxc_health: Lost carrier Jan 29 16:26:27.106175 systemd[1]: cri-containerd-2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75.scope: Deactivated successfully. Jan 29 16:26:27.106623 systemd[1]: cri-containerd-2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75.scope: Consumed 8.212s CPU time, 126.6M memory peak, 796K read from disk, 13.3M written to disk. 
Jan 29 16:26:27.153268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75-rootfs.mount: Deactivated successfully. Jan 29 16:26:27.192091 kubelet[2371]: E0129 16:26:27.192010 2371 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 16:26:27.214504 containerd[1909]: time="2025-01-29T16:26:27.213298206Z" level=info msg="shim disconnected" id=2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75 namespace=k8s.io Jan 29 16:26:27.214504 containerd[1909]: time="2025-01-29T16:26:27.214495319Z" level=warning msg="cleaning up after shim disconnected" id=2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75 namespace=k8s.io Jan 29 16:26:27.214504 containerd[1909]: time="2025-01-29T16:26:27.214507906Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:26:27.246554 containerd[1909]: time="2025-01-29T16:26:27.246510389Z" level=info msg="StopContainer for \"2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75\" returns successfully" Jan 29 16:26:27.255610 containerd[1909]: time="2025-01-29T16:26:27.254988843Z" level=info msg="StopPodSandbox for \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\"" Jan 29 16:26:27.263677 containerd[1909]: time="2025-01-29T16:26:27.255046919Z" level=info msg="Container to stop \"45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:26:27.263677 containerd[1909]: time="2025-01-29T16:26:27.263672505Z" level=info msg="Container to stop \"2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:26:27.264197 containerd[1909]: time="2025-01-29T16:26:27.263694954Z" level=info msg="Container to stop \"af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:26:27.264197 containerd[1909]: time="2025-01-29T16:26:27.263753398Z" level=info msg="Container to stop \"4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:26:27.264197 containerd[1909]: time="2025-01-29T16:26:27.263768380Z" level=info msg="Container to stop \"a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:26:27.274376 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227-shm.mount: Deactivated successfully. Jan 29 16:26:27.284042 systemd[1]: cri-containerd-da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227.scope: Deactivated successfully. Jan 29 16:26:27.318581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227-rootfs.mount: Deactivated successfully. 
Jan 29 16:26:27.334695 containerd[1909]: time="2025-01-29T16:26:27.334477638Z" level=info msg="shim disconnected" id=da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227 namespace=k8s.io Jan 29 16:26:27.334695 containerd[1909]: time="2025-01-29T16:26:27.334655013Z" level=warning msg="cleaning up after shim disconnected" id=da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227 namespace=k8s.io Jan 29 16:26:27.334695 containerd[1909]: time="2025-01-29T16:26:27.334697371Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:26:27.359352 containerd[1909]: time="2025-01-29T16:26:27.359310801Z" level=info msg="TearDown network for sandbox \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\" successfully" Jan 29 16:26:27.359498 containerd[1909]: time="2025-01-29T16:26:27.359424415Z" level=info msg="StopPodSandbox for \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\" returns successfully" Jan 29 16:26:27.413423 kubelet[2371]: I0129 16:26:27.413263 2371 scope.go:117] "RemoveContainer" containerID="2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75" Jan 29 16:26:27.418201 containerd[1909]: time="2025-01-29T16:26:27.418153906Z" level=info msg="RemoveContainer for \"2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75\"" Jan 29 16:26:27.425552 containerd[1909]: time="2025-01-29T16:26:27.424626117Z" level=info msg="RemoveContainer for \"2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75\" returns successfully" Jan 29 16:26:27.425716 kubelet[2371]: I0129 16:26:27.425443 2371 scope.go:117] "RemoveContainer" containerID="45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13" Jan 29 16:26:27.426817 containerd[1909]: time="2025-01-29T16:26:27.426766147Z" level=info msg="RemoveContainer for \"45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13\"" Jan 29 16:26:27.434236 containerd[1909]: time="2025-01-29T16:26:27.434184113Z" level=info msg="RemoveContainer for \"45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13\" returns successfully" Jan 29 16:26:27.434708 kubelet[2371]: I0129 16:26:27.434649 2371 scope.go:117] "RemoveContainer" containerID="4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7" Jan 29 16:26:27.436316 containerd[1909]: time="2025-01-29T16:26:27.436284501Z" level=info msg="RemoveContainer for \"4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7\"" Jan 29 16:26:27.448405 containerd[1909]: time="2025-01-29T16:26:27.448358901Z" level=info msg="RemoveContainer for \"4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7\" returns successfully" Jan 29 16:26:27.448666 kubelet[2371]: I0129 16:26:27.448634 2371 scope.go:117] "RemoveContainer" containerID="a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c" Jan 29 16:26:27.449701 containerd[1909]: time="2025-01-29T16:26:27.449671374Z" level=info msg="RemoveContainer for \"a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c\"" Jan 29 16:26:27.455863 containerd[1909]: time="2025-01-29T16:26:27.455759549Z" level=info msg="RemoveContainer for \"a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c\" returns successfully" Jan 29 16:26:27.456089 kubelet[2371]: I0129 16:26:27.456061 2371 scope.go:117] "RemoveContainer" containerID="af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25" Jan 29 16:26:27.458419 kubelet[2371]: I0129 16:26:27.457947 2371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c290ef4-c5de-4aff-9c50-00258865ebc3-clustermesh-secrets\") pod \"3c290ef4-c5de-4aff-9c50-00258865ebc3\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " Jan 29 16:26:27.458419 kubelet[2371]: I0129 16:26:27.458008 2371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-host-proc-sys-net\") pod \"3c290ef4-c5de-4aff-9c50-00258865ebc3\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " Jan 29 16:26:27.458419 kubelet[2371]: I0129 16:26:27.458039 2371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-cilium-cgroup\") pod \"3c290ef4-c5de-4aff-9c50-00258865ebc3\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " Jan 29 16:26:27.458419 kubelet[2371]: I0129 16:26:27.458069 2371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c290ef4-c5de-4aff-9c50-00258865ebc3-hubble-tls\") pod \"3c290ef4-c5de-4aff-9c50-00258865ebc3\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " Jan 29 16:26:27.458419 kubelet[2371]: I0129 16:26:27.458093 2371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-bpf-maps\") pod \"3c290ef4-c5de-4aff-9c50-00258865ebc3\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " Jan 29 16:26:27.458419 kubelet[2371]: I0129 16:26:27.458120 2371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c290ef4-c5de-4aff-9c50-00258865ebc3-cilium-config-path\") pod \"3c290ef4-c5de-4aff-9c50-00258865ebc3\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " Jan 29 16:26:27.458768 kubelet[2371]: I0129 16:26:27.458145 2371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-host-proc-sys-kernel\") pod \"3c290ef4-c5de-4aff-9c50-00258865ebc3\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " Jan 29 16:26:27.458768 kubelet[2371]: I0129 16:26:27.458168 2371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-xtables-lock\") pod \"3c290ef4-c5de-4aff-9c50-00258865ebc3\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " Jan 29 16:26:27.458768 kubelet[2371]: I0129 16:26:27.458189 2371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-hostproc\") pod \"3c290ef4-c5de-4aff-9c50-00258865ebc3\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " Jan 29 16:26:27.458768 kubelet[2371]: I0129 16:26:27.458214 2371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjtvl\" (UniqueName: \"kubernetes.io/projected/3c290ef4-c5de-4aff-9c50-00258865ebc3-kube-api-access-sjtvl\") pod \"3c290ef4-c5de-4aff-9c50-00258865ebc3\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " Jan 29 16:26:27.458768 kubelet[2371]: I0129 16:26:27.458239 2371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-cilium-run\") pod \"3c290ef4-c5de-4aff-9c50-00258865ebc3\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " Jan 29 16:26:27.458768 kubelet[2371]: I0129 16:26:27.458261 2371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-etc-cni-netd\") pod \"3c290ef4-c5de-4aff-9c50-00258865ebc3\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " Jan 29 16:26:27.459008 kubelet[2371]: I0129 16:26:27.458284 2371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-lib-modules\") pod \"3c290ef4-c5de-4aff-9c50-00258865ebc3\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " Jan 29 16:26:27.459008 kubelet[2371]: I0129 16:26:27.458305 2371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-cni-path\") pod \"3c290ef4-c5de-4aff-9c50-00258865ebc3\" (UID: \"3c290ef4-c5de-4aff-9c50-00258865ebc3\") " Jan 29 16:26:27.465691 kubelet[2371]: I0129 16:26:27.465647 2371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-cni-path" (OuterVolumeSpecName: "cni-path") pod "3c290ef4-c5de-4aff-9c50-00258865ebc3" (UID: "3c290ef4-c5de-4aff-9c50-00258865ebc3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:27.466345 kubelet[2371]: I0129 16:26:27.465933 2371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3c290ef4-c5de-4aff-9c50-00258865ebc3" (UID: "3c290ef4-c5de-4aff-9c50-00258865ebc3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:27.466345 kubelet[2371]: I0129 16:26:27.465969 2371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3c290ef4-c5de-4aff-9c50-00258865ebc3" (UID: "3c290ef4-c5de-4aff-9c50-00258865ebc3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:27.466946 kubelet[2371]: I0129 16:26:27.466917 2371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-hostproc" (OuterVolumeSpecName: "hostproc") pod "3c290ef4-c5de-4aff-9c50-00258865ebc3" (UID: "3c290ef4-c5de-4aff-9c50-00258865ebc3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:27.467089 kubelet[2371]: I0129 16:26:27.467070 2371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3c290ef4-c5de-4aff-9c50-00258865ebc3" (UID: "3c290ef4-c5de-4aff-9c50-00258865ebc3"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:27.469750 kubelet[2371]: I0129 16:26:27.469672 2371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3c290ef4-c5de-4aff-9c50-00258865ebc3" (UID: "3c290ef4-c5de-4aff-9c50-00258865ebc3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:27.469750 kubelet[2371]: I0129 16:26:27.469705 2371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3c290ef4-c5de-4aff-9c50-00258865ebc3" (UID: "3c290ef4-c5de-4aff-9c50-00258865ebc3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:27.470143 kubelet[2371]: I0129 16:26:27.470121 2371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3c290ef4-c5de-4aff-9c50-00258865ebc3" (UID: "3c290ef4-c5de-4aff-9c50-00258865ebc3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:27.470296 kubelet[2371]: I0129 16:26:27.470280 2371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3c290ef4-c5de-4aff-9c50-00258865ebc3" (UID: "3c290ef4-c5de-4aff-9c50-00258865ebc3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:27.470952 kubelet[2371]: I0129 16:26:27.470373 2371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3c290ef4-c5de-4aff-9c50-00258865ebc3" (UID: "3c290ef4-c5de-4aff-9c50-00258865ebc3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:26:27.471036 containerd[1909]: time="2025-01-29T16:26:27.470559154Z" level=info msg="RemoveContainer for \"af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25\"" Jan 29 16:26:27.471495 kubelet[2371]: I0129 16:26:27.471452 2371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c290ef4-c5de-4aff-9c50-00258865ebc3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3c290ef4-c5de-4aff-9c50-00258865ebc3" (UID: "3c290ef4-c5de-4aff-9c50-00258865ebc3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:26:27.476392 kubelet[2371]: I0129 16:26:27.475151 2371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c290ef4-c5de-4aff-9c50-00258865ebc3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3c290ef4-c5de-4aff-9c50-00258865ebc3" (UID: "3c290ef4-c5de-4aff-9c50-00258865ebc3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:26:27.476517 containerd[1909]: time="2025-01-29T16:26:27.476270565Z" level=info msg="RemoveContainer for \"af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25\" returns successfully" Jan 29 16:26:27.476591 kubelet[2371]: I0129 16:26:27.476559 2371 scope.go:117] "RemoveContainer" containerID="2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75" Jan 29 16:26:27.476748 kubelet[2371]: I0129 16:26:27.476716 2371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c290ef4-c5de-4aff-9c50-00258865ebc3-kube-api-access-sjtvl" (OuterVolumeSpecName: "kube-api-access-sjtvl") pod "3c290ef4-c5de-4aff-9c50-00258865ebc3" (UID: "3c290ef4-c5de-4aff-9c50-00258865ebc3"). InnerVolumeSpecName "kube-api-access-sjtvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:26:27.476923 kubelet[2371]: I0129 16:26:27.476796 2371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c290ef4-c5de-4aff-9c50-00258865ebc3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3c290ef4-c5de-4aff-9c50-00258865ebc3" (UID: "3c290ef4-c5de-4aff-9c50-00258865ebc3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:26:27.477179 containerd[1909]: time="2025-01-29T16:26:27.477124695Z" level=error msg="ContainerStatus for \"2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75\": not found" Jan 29 16:26:27.487852 kubelet[2371]: E0129 16:26:27.487534 2371 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75\": not found" containerID="2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75" Jan 29 16:26:27.487995 kubelet[2371]: I0129 16:26:27.487875 2371 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75"} err="failed to get container status \"2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f4a21134b586da5e18376e3d6548d36e035443d857adf6edeafe9118fc3ed75\": not found" Jan 29 16:26:27.487995 kubelet[2371]: I0129 16:26:27.487990 2371 scope.go:117] "RemoveContainer" containerID="45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13" Jan 29 16:26:27.488480 containerd[1909]: time="2025-01-29T16:26:27.488432206Z" level=error msg="ContainerStatus for \"45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13\": not found" Jan 29 16:26:27.489333 kubelet[2371]: E0129 16:26:27.488758 2371 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13\": not found" containerID="45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13" Jan 29 16:26:27.489472 kubelet[2371]: I0129 16:26:27.489324 2371 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13"} err="failed to get container status \"45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13\": rpc error: code = NotFound desc = an error occurred when try to find container \"45605f19ca33110cbefc32dbce4c8422478bc1af4f5474bdf77c5a1035551d13\": not found" Jan 29 16:26:27.489472 kubelet[2371]: I0129 16:26:27.489356 2371 scope.go:117] "RemoveContainer" containerID="4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7" Jan 29 16:26:27.489669 containerd[1909]: time="2025-01-29T16:26:27.489632238Z" level=error msg="ContainerStatus for \"4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7\": not found" Jan 29 16:26:27.489805 kubelet[2371]: E0129 16:26:27.489779 2371 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7\": not found" containerID="4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7" Jan 29 16:26:27.489877 kubelet[2371]: I0129 16:26:27.489807 2371 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7"} err="failed to get container status \"4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ade1fc9df52507ffce254409bd7be48b46cb2efcc26f92c6c4a44ddeddcccb7\": not found" Jan 29 16:26:27.489877 kubelet[2371]: I0129 16:26:27.489829 2371 scope.go:117] "RemoveContainer" containerID="a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c" Jan 29 16:26:27.490069 containerd[1909]: time="2025-01-29T16:26:27.490033544Z" level=error msg="ContainerStatus for \"a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c\": not found" Jan 29 16:26:27.490174 kubelet[2371]: E0129 16:26:27.490154 2371 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c\": not found" containerID="a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c" Jan 29 16:26:27.490245 kubelet[2371]: I0129 16:26:27.490181 2371 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c"} err="failed to get container status \"a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c\": rpc error: code = NotFound desc = an error occurred when try to find container \"a0b477d6f8f463cb8e1577823bfdae036aa557107442c0f38d14330413a35e4c\": not found" Jan 29 16:26:27.490245 kubelet[2371]: I0129 16:26:27.490200 2371 scope.go:117] "RemoveContainer" containerID="af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25" Jan 29 16:26:27.490724 kubelet[2371]: E0129 16:26:27.490661 2371 remote_runtime.go:432] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25\": not found" containerID="af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25" Jan 29 16:26:27.490724 kubelet[2371]: I0129 16:26:27.490714 2371 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25"} err="failed to get container status \"af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25\": rpc error: code = NotFound desc = an error occurred when try to find container \"af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25\": not found" Jan 29 16:26:27.490840 containerd[1909]: time="2025-01-29T16:26:27.490402107Z" level=error msg="ContainerStatus for \"af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af8591aea898ef949141e6e341ab45fc3d2daaa9f55fd4da0f26563dda83be25\": not found" Jan 29 16:26:27.559230 kubelet[2371]: I0129 16:26:27.559182 2371 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c290ef4-c5de-4aff-9c50-00258865ebc3-clustermesh-secrets\") on node \"172.31.23.123\" DevicePath \"\"" Jan 29 16:26:27.559230 kubelet[2371]: I0129 16:26:27.559227 2371 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-host-proc-sys-net\") on node \"172.31.23.123\" DevicePath \"\"" Jan 29 16:26:27.559230 kubelet[2371]: I0129 16:26:27.559238 2371 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-cilium-cgroup\") on node \"172.31.23.123\" DevicePath \"\"" Jan 29 16:26:27.559523 kubelet[2371]: I0129 16:26:27.559250 2371 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c290ef4-c5de-4aff-9c50-00258865ebc3-hubble-tls\") on node \"172.31.23.123\" DevicePath \"\"" Jan 29 16:26:27.559523 kubelet[2371]: I0129 16:26:27.559264 2371 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-bpf-maps\") on node \"172.31.23.123\" DevicePath \"\"" Jan 29 16:26:27.559523 kubelet[2371]: I0129 16:26:27.559277 2371 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c290ef4-c5de-4aff-9c50-00258865ebc3-cilium-config-path\") on node \"172.31.23.123\" DevicePath \"\"" Jan 29 16:26:27.559523 kubelet[2371]: I0129 16:26:27.559289 2371 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-host-proc-sys-kernel\") on node \"172.31.23.123\" DevicePath \"\"" Jan 29 16:26:27.559523 kubelet[2371]: I0129 16:26:27.559299 2371 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-xtables-lock\") on node \"172.31.23.123\" DevicePath \"\"" Jan 29 16:26:27.559523 kubelet[2371]: I0129 16:26:27.559309 2371 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-hostproc\") on node 
\"172.31.23.123\" DevicePath \"\"" Jan 29 16:26:27.559523 kubelet[2371]: I0129 16:26:27.559318 2371 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-sjtvl\" (UniqueName: \"kubernetes.io/projected/3c290ef4-c5de-4aff-9c50-00258865ebc3-kube-api-access-sjtvl\") on node \"172.31.23.123\" DevicePath \"\"" Jan 29 16:26:27.559523 kubelet[2371]: I0129 16:26:27.559354 2371 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-cilium-run\") on node \"172.31.23.123\" DevicePath \"\"" Jan 29 16:26:27.559847 kubelet[2371]: I0129 16:26:27.559367 2371 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-etc-cni-netd\") on node \"172.31.23.123\" DevicePath \"\"" Jan 29 16:26:27.559847 kubelet[2371]: I0129 16:26:27.559378 2371 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-lib-modules\") on node \"172.31.23.123\" DevicePath \"\"" Jan 29 16:26:27.559847 kubelet[2371]: I0129 16:26:27.559389 2371 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c290ef4-c5de-4aff-9c50-00258865ebc3-cni-path\") on node \"172.31.23.123\" DevicePath \"\"" Jan 29 16:26:27.713336 systemd[1]: Removed slice kubepods-burstable-pod3c290ef4_c5de_4aff_9c50_00258865ebc3.slice - libcontainer container kubepods-burstable-pod3c290ef4_c5de_4aff_9c50_00258865ebc3.slice. Jan 29 16:26:27.713477 systemd[1]: kubepods-burstable-pod3c290ef4_c5de_4aff_9c50_00258865ebc3.slice: Consumed 8.302s CPU time, 126.9M memory peak, 800K read from disk, 13.3M written to disk. Jan 29 16:26:28.014995 systemd[1]: var-lib-kubelet-pods-3c290ef4\x2dc5de\x2d4aff\x2d9c50\x2d00258865ebc3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsjtvl.mount: Deactivated successfully. Jan 29 16:26:28.015213 systemd[1]: var-lib-kubelet-pods-3c290ef4\x2dc5de\x2d4aff\x2d9c50\x2d00258865ebc3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 16:26:28.015320 systemd[1]: var-lib-kubelet-pods-3c290ef4\x2dc5de\x2d4aff\x2d9c50\x2d00258865ebc3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 29 16:26:28.052736 kubelet[2371]: E0129 16:26:28.052677 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:28.883346 kubelet[2371]: I0129 16:26:28.883088 2371 setters.go:580] "Node became not ready" node="172.31.23.123" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T16:26:28Z","lastTransitionTime":"2025-01-29T16:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 16:26:29.053730 kubelet[2371]: E0129 16:26:29.053663 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:29.186498 kubelet[2371]: I0129 16:26:29.186385 2371 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c290ef4-c5de-4aff-9c50-00258865ebc3" path="/var/lib/kubelet/pods/3c290ef4-c5de-4aff-9c50-00258865ebc3/volumes" Jan 29 16:26:29.933708 ntpd[1877]: Deleting interface #11 lxc_health, fe80::d87b:26ff:fee2:d4ee%7#123, interface stats: received=0, sent=0, dropped=0, active_time=38 secs Jan 29 16:26:29.934039 ntpd[1877]: 29 Jan 16:26:29 ntpd[1877]: Deleting interface #11 lxc_health, fe80::d87b:26ff:fee2:d4ee%7#123, interface stats: received=0, sent=0, dropped=0, active_time=38 secs Jan 29 16:26:30.054867 kubelet[2371]: E0129 16:26:30.054820 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:31.055862 kubelet[2371]: E0129 16:26:31.055792 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:31.075864 kubelet[2371]: I0129 16:26:31.075806 2371 topology_manager.go:215] "Topology Admit Handler" podUID="ad74aa82-387f-401b-a561-4b51553636cb" podNamespace="kube-system" podName="cilium-operator-599987898-mhd67" Jan 29 16:26:31.076028 kubelet[2371]: E0129 16:26:31.075883 2371 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c290ef4-c5de-4aff-9c50-00258865ebc3" containerName="clean-cilium-state" Jan 29 16:26:31.076028 kubelet[2371]: E0129 16:26:31.075895 2371 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c290ef4-c5de-4aff-9c50-00258865ebc3" containerName="apply-sysctl-overwrites" Jan 29 16:26:31.076028 kubelet[2371]: E0129 16:26:31.075903 2371 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c290ef4-c5de-4aff-9c50-00258865ebc3" containerName="cilium-agent" Jan 29 16:26:31.076028 kubelet[2371]: E0129 16:26:31.075911 2371 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c290ef4-c5de-4aff-9c50-00258865ebc3" containerName="mount-cgroup" Jan 29 16:26:31.076028 kubelet[2371]: E0129 16:26:31.075919 2371 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c290ef4-c5de-4aff-9c50-00258865ebc3" containerName="mount-bpf-fs" Jan 29 16:26:31.080441 kubelet[2371]: I0129 16:26:31.080402 2371 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c290ef4-c5de-4aff-9c50-00258865ebc3" containerName="cilium-agent" Jan 29 16:26:31.086871 systemd[1]: Created slice kubepods-besteffort-podad74aa82_387f_401b_a561_4b51553636cb.slice - libcontainer container kubepods-besteffort-podad74aa82_387f_401b_a561_4b51553636cb.slice. 
Jan 29 16:26:31.102346 kubelet[2371]: W0129 16:26:31.102300 2371 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.23.123" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.23.123' and this object Jan 29 16:26:31.102488 kubelet[2371]: E0129 16:26:31.102351 2371 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.23.123" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.23.123' and this object Jan 29 16:26:31.181789 kubelet[2371]: I0129 16:26:31.181732 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad74aa82-387f-401b-a561-4b51553636cb-cilium-config-path\") pod \"cilium-operator-599987898-mhd67\" (UID: \"ad74aa82-387f-401b-a561-4b51553636cb\") " pod="kube-system/cilium-operator-599987898-mhd67" Jan 29 16:26:31.181789 kubelet[2371]: I0129 16:26:31.181789 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6wsn\" (UniqueName: \"kubernetes.io/projected/ad74aa82-387f-401b-a561-4b51553636cb-kube-api-access-q6wsn\") pod \"cilium-operator-599987898-mhd67\" (UID: \"ad74aa82-387f-401b-a561-4b51553636cb\") " pod="kube-system/cilium-operator-599987898-mhd67" Jan 29 16:26:31.243806 kubelet[2371]: I0129 16:26:31.242994 2371 topology_manager.go:215] "Topology Admit Handler" podUID="7f225029-7cf3-4f20-8004-32fca9fb4bfa" podNamespace="kube-system" podName="cilium-k29v7" Jan 29 16:26:31.259605 systemd[1]: Created slice kubepods-burstable-pod7f225029_7cf3_4f20_8004_32fca9fb4bfa.slice - libcontainer container kubepods-burstable-pod7f225029_7cf3_4f20_8004_32fca9fb4bfa.slice. 
Jan 29 16:26:31.269731 kubelet[2371]: W0129 16:26:31.269698 2371 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.23.123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.23.123' and this object Jan 29 16:26:31.269731 kubelet[2371]: E0129 16:26:31.269734 2371 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.23.123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.23.123' and this object Jan 29 16:26:31.283833 kubelet[2371]: W0129 16:26:31.283804 2371 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.31.23.123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.23.123' and this object Jan 29 16:26:31.283977 kubelet[2371]: E0129 16:26:31.283848 2371 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.31.23.123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.23.123' and this object Jan 29 16:26:31.283977 kubelet[2371]: W0129 16:26:31.283804 2371 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.31.23.123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.23.123' and this object Jan 29 16:26:31.283977 kubelet[2371]: E0129 16:26:31.283871 2371 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.31.23.123" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.23.123' and this object Jan 29 16:26:31.383370 kubelet[2371]: I0129 16:26:31.383225 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f225029-7cf3-4f20-8004-32fca9fb4bfa-cilium-run\") pod \"cilium-k29v7\" (UID: \"7f225029-7cf3-4f20-8004-32fca9fb4bfa\") " pod="kube-system/cilium-k29v7" Jan 29 16:26:31.383370 kubelet[2371]: I0129 16:26:31.383272 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f225029-7cf3-4f20-8004-32fca9fb4bfa-etc-cni-netd\") pod \"cilium-k29v7\" (UID: \"7f225029-7cf3-4f20-8004-32fca9fb4bfa\") " pod="kube-system/cilium-k29v7" Jan 29 16:26:31.383370 kubelet[2371]: I0129 16:26:31.383299 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7f225029-7cf3-4f20-8004-32fca9fb4bfa-cilium-ipsec-secrets\") pod \"cilium-k29v7\" (UID: \"7f225029-7cf3-4f20-8004-32fca9fb4bfa\") " pod="kube-system/cilium-k29v7" Jan 29 16:26:31.383370 kubelet[2371]: I0129 16:26:31.383321 2371 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdgq4\" (UniqueName: \"kubernetes.io/projected/7f225029-7cf3-4f20-8004-32fca9fb4bfa-kube-api-access-hdgq4\") pod \"cilium-k29v7\" (UID: \"7f225029-7cf3-4f20-8004-32fca9fb4bfa\") " pod="kube-system/cilium-k29v7" Jan 29 16:26:31.383370 kubelet[2371]: I0129 16:26:31.383349 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f225029-7cf3-4f20-8004-32fca9fb4bfa-bpf-maps\") pod \"cilium-k29v7\" (UID: \"7f225029-7cf3-4f20-8004-32fca9fb4bfa\") " pod="kube-system/cilium-k29v7" Jan 29 16:26:31.383370 kubelet[2371]: I0129 16:26:31.383370 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f225029-7cf3-4f20-8004-32fca9fb4bfa-host-proc-sys-net\") pod \"cilium-k29v7\" (UID: \"7f225029-7cf3-4f20-8004-32fca9fb4bfa\") " pod="kube-system/cilium-k29v7" Jan 29 16:26:31.383726 kubelet[2371]: I0129 16:26:31.383403 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f225029-7cf3-4f20-8004-32fca9fb4bfa-hostproc\") pod \"cilium-k29v7\" (UID: \"7f225029-7cf3-4f20-8004-32fca9fb4bfa\") " pod="kube-system/cilium-k29v7" Jan 29 16:26:31.383726 kubelet[2371]: I0129 16:26:31.383425 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f225029-7cf3-4f20-8004-32fca9fb4bfa-xtables-lock\") pod \"cilium-k29v7\" (UID: \"7f225029-7cf3-4f20-8004-32fca9fb4bfa\") " pod="kube-system/cilium-k29v7" Jan 29 16:26:31.383726 kubelet[2371]: I0129 16:26:31.383448 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f225029-7cf3-4f20-8004-32fca9fb4bfa-cilium-config-path\") pod \"cilium-k29v7\" (UID: \"7f225029-7cf3-4f20-8004-32fca9fb4bfa\") " pod="kube-system/cilium-k29v7" Jan 29 16:26:31.383726 kubelet[2371]: I0129 16:26:31.383477 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f225029-7cf3-4f20-8004-32fca9fb4bfa-cilium-cgroup\") pod \"cilium-k29v7\" (UID: \"7f225029-7cf3-4f20-8004-32fca9fb4bfa\") " pod="kube-system/cilium-k29v7" Jan 29 16:26:31.383726 kubelet[2371]: I0129 16:26:31.383499 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f225029-7cf3-4f20-8004-32fca9fb4bfa-cni-path\") pod \"cilium-k29v7\" (UID: \"7f225029-7cf3-4f20-8004-32fca9fb4bfa\") " pod="kube-system/cilium-k29v7" Jan 29 16:26:31.383726 kubelet[2371]: I0129 16:26:31.383523 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f225029-7cf3-4f20-8004-32fca9fb4bfa-lib-modules\") pod \"cilium-k29v7\" (UID: \"7f225029-7cf3-4f20-8004-32fca9fb4bfa\") " pod="kube-system/cilium-k29v7" Jan 29 16:26:31.383943 kubelet[2371]: I0129 16:26:31.383548 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f225029-7cf3-4f20-8004-32fca9fb4bfa-clustermesh-secrets\") pod 
\"cilium-k29v7\" (UID: \"7f225029-7cf3-4f20-8004-32fca9fb4bfa\") " pod="kube-system/cilium-k29v7" Jan 29 16:26:31.383943 kubelet[2371]: I0129 16:26:31.383588 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f225029-7cf3-4f20-8004-32fca9fb4bfa-host-proc-sys-kernel\") pod \"cilium-k29v7\" (UID: \"7f225029-7cf3-4f20-8004-32fca9fb4bfa\") " pod="kube-system/cilium-k29v7" Jan 29 16:26:31.383943 kubelet[2371]: I0129 16:26:31.383613 2371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f225029-7cf3-4f20-8004-32fca9fb4bfa-hubble-tls\") pod \"cilium-k29v7\" (UID: \"7f225029-7cf3-4f20-8004-32fca9fb4bfa\") " pod="kube-system/cilium-k29v7" Jan 29 16:26:32.056414 kubelet[2371]: E0129 16:26:32.056373 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:32.193382 kubelet[2371]: E0129 16:26:32.193306 2371 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 16:26:32.283057 kubelet[2371]: E0129 16:26:32.283018 2371 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 29 16:26:32.283198 kubelet[2371]: E0129 16:26:32.283124 2371 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ad74aa82-387f-401b-a561-4b51553636cb-cilium-config-path podName:ad74aa82-387f-401b-a561-4b51553636cb nodeName:}" failed. No retries permitted until 2025-01-29 16:26:32.78309858 +0000 UTC m=+66.302546414 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/ad74aa82-387f-401b-a561-4b51553636cb-cilium-config-path") pod "cilium-operator-599987898-mhd67" (UID: "ad74aa82-387f-401b-a561-4b51553636cb") : failed to sync configmap cache: timed out waiting for the condition Jan 29 16:26:32.489213 kubelet[2371]: E0129 16:26:32.489076 2371 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 29 16:26:32.489213 kubelet[2371]: E0129 16:26:32.489126 2371 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-k29v7: failed to sync secret cache: timed out waiting for the condition Jan 29 16:26:32.489376 kubelet[2371]: E0129 16:26:32.489217 2371 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f225029-7cf3-4f20-8004-32fca9fb4bfa-hubble-tls podName:7f225029-7cf3-4f20-8004-32fca9fb4bfa nodeName:}" failed. No retries permitted until 2025-01-29 16:26:32.989192572 +0000 UTC m=+66.508640409 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/7f225029-7cf3-4f20-8004-32fca9fb4bfa-hubble-tls") pod "cilium-k29v7" (UID: "7f225029-7cf3-4f20-8004-32fca9fb4bfa") : failed to sync secret cache: timed out waiting for the condition Jan 29 16:26:32.889830 containerd[1909]: time="2025-01-29T16:26:32.889755519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-mhd67,Uid:ad74aa82-387f-401b-a561-4b51553636cb,Namespace:kube-system,Attempt:0,}" Jan 29 16:26:32.926411 containerd[1909]: time="2025-01-29T16:26:32.926210269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:32.926411 containerd[1909]: time="2025-01-29T16:26:32.926258324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:32.926411 containerd[1909]: time="2025-01-29T16:26:32.926271831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:32.926411 containerd[1909]: time="2025-01-29T16:26:32.926360809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:32.958809 systemd[1]: Started cri-containerd-3818ca791bdcebeaa1e6f4c60255246963c77597635e0e01d9c9edd26b1ca1e3.scope - libcontainer container 3818ca791bdcebeaa1e6f4c60255246963c77597635e0e01d9c9edd26b1ca1e3. Jan 29 16:26:33.015133 containerd[1909]: time="2025-01-29T16:26:33.014813426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-mhd67,Uid:ad74aa82-387f-401b-a561-4b51553636cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"3818ca791bdcebeaa1e6f4c60255246963c77597635e0e01d9c9edd26b1ca1e3\"" Jan 29 16:26:33.016797 containerd[1909]: time="2025-01-29T16:26:33.016773233Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 16:26:33.056872 kubelet[2371]: E0129 16:26:33.056815 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:33.077912 containerd[1909]: time="2025-01-29T16:26:33.077867639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k29v7,Uid:7f225029-7cf3-4f20-8004-32fca9fb4bfa,Namespace:kube-system,Attempt:0,}" Jan 29 16:26:33.113107 containerd[1909]: time="2025-01-29T16:26:33.112368596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:26:33.113107 containerd[1909]: time="2025-01-29T16:26:33.112443002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:26:33.113107 containerd[1909]: time="2025-01-29T16:26:33.112468346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:33.113107 containerd[1909]: time="2025-01-29T16:26:33.112562082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:26:33.136782 systemd[1]: Started cri-containerd-deac393ad68262b533af707887922299b635b8d5b2249d8c0f4f466bf0cd6713.scope - libcontainer container deac393ad68262b533af707887922299b635b8d5b2249d8c0f4f466bf0cd6713. Jan 29 16:26:33.173693 containerd[1909]: time="2025-01-29T16:26:33.172382253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k29v7,Uid:7f225029-7cf3-4f20-8004-32fca9fb4bfa,Namespace:kube-system,Attempt:0,} returns sandbox id \"deac393ad68262b533af707887922299b635b8d5b2249d8c0f4f466bf0cd6713\"" Jan 29 16:26:33.176479 containerd[1909]: time="2025-01-29T16:26:33.176455122Z" level=info msg="CreateContainer within sandbox \"deac393ad68262b533af707887922299b635b8d5b2249d8c0f4f466bf0cd6713\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:26:33.197323 containerd[1909]: time="2025-01-29T16:26:33.197270391Z" level=info msg="CreateContainer within sandbox \"deac393ad68262b533af707887922299b635b8d5b2249d8c0f4f466bf0cd6713\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7b6ad840a1bacfeb5fec5fe1e3aeefcdabcfa9117fa593171963b20249e43389\"" Jan 29 16:26:33.198865 containerd[1909]: time="2025-01-29T16:26:33.197897352Z" level=info msg="StartContainer for \"7b6ad840a1bacfeb5fec5fe1e3aeefcdabcfa9117fa593171963b20249e43389\"" Jan 29 16:26:33.243221 systemd[1]: Started cri-containerd-7b6ad840a1bacfeb5fec5fe1e3aeefcdabcfa9117fa593171963b20249e43389.scope - libcontainer container 7b6ad840a1bacfeb5fec5fe1e3aeefcdabcfa9117fa593171963b20249e43389. Jan 29 16:26:33.311603 containerd[1909]: time="2025-01-29T16:26:33.311542917Z" level=info msg="StartContainer for \"7b6ad840a1bacfeb5fec5fe1e3aeefcdabcfa9117fa593171963b20249e43389\" returns successfully" Jan 29 16:26:33.345434 systemd[1]: cri-containerd-7b6ad840a1bacfeb5fec5fe1e3aeefcdabcfa9117fa593171963b20249e43389.scope: Deactivated successfully. Jan 29 16:26:33.346211 systemd[1]: cri-containerd-7b6ad840a1bacfeb5fec5fe1e3aeefcdabcfa9117fa593171963b20249e43389.scope: Consumed 24ms CPU time, 9.6M memory peak, 3.3M read from disk. 
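The containerd and systemd entries above trace the CRI-level sequence for the cilium-k29v7 pod: RunPodSandbox returns a sandbox ID, CreateContainer within that sandbox returns a container ID, StartContainer runs it, and the short-lived mount-cgroup step exits almost immediately, at which point systemd deactivates its scope and reports the resources it consumed. The minimal sketch below mirrors only that call order; fakeCRI is a made-up stand-in, since the real kubelet drives containerd over the CRI gRPC API rather than anything shown here.

```go
package main

import (
	"fmt"
	"log"
)

// fakeCRI is a hypothetical stand-in for the CRI runtime service; it only
// exists so the call order from the log can be shown in a runnable form.
type fakeCRI struct{ nextID int }

func (f *fakeCRI) RunPodSandbox(pod string) string {
	f.nextID++
	return fmt.Sprintf("sandbox-%d(%s)", f.nextID, pod)
}

func (f *fakeCRI) CreateContainer(sandboxID, name string) string {
	f.nextID++
	return fmt.Sprintf("ctr-%d(%s in %s)", f.nextID, name, sandboxID)
}

func (f *fakeCRI) StartContainer(id string) error { return nil }

func main() {
	rt := &fakeCRI{}

	// Same order as in the log: RunPodSandbox once per pod, then create and
	// start each container inside the returned sandbox.
	sb := rt.RunPodSandbox("cilium-k29v7")
	log.Printf("RunPodSandbox returns sandbox id %q", sb)

	ctr := rt.CreateContainer(sb, "mount-cgroup")
	if err := rt.StartContainer(ctr); err != nil {
		log.Fatalf("StartContainer: %v", err)
	}
	log.Printf("StartContainer for %q returns successfully", ctr)
}
```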
Jan 29 16:26:33.418605 containerd[1909]: time="2025-01-29T16:26:33.418100755Z" level=info msg="shim disconnected" id=7b6ad840a1bacfeb5fec5fe1e3aeefcdabcfa9117fa593171963b20249e43389 namespace=k8s.io Jan 29 16:26:33.418605 containerd[1909]: time="2025-01-29T16:26:33.418245229Z" level=warning msg="cleaning up after shim disconnected" id=7b6ad840a1bacfeb5fec5fe1e3aeefcdabcfa9117fa593171963b20249e43389 namespace=k8s.io Jan 29 16:26:33.418605 containerd[1909]: time="2025-01-29T16:26:33.418259399Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:26:34.057909 kubelet[2371]: E0129 16:26:34.057853 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:34.432392 containerd[1909]: time="2025-01-29T16:26:34.432284882Z" level=info msg="CreateContainer within sandbox \"deac393ad68262b533af707887922299b635b8d5b2249d8c0f4f466bf0cd6713\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:26:34.475468 containerd[1909]: time="2025-01-29T16:26:34.475404242Z" level=info msg="CreateContainer within sandbox \"deac393ad68262b533af707887922299b635b8d5b2249d8c0f4f466bf0cd6713\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e57078a613207be5987c183a26187535b07fed3a76758179797f1769e60ee4eb\"" Jan 29 16:26:34.476233 containerd[1909]: time="2025-01-29T16:26:34.476152356Z" level=info msg="StartContainer for \"e57078a613207be5987c183a26187535b07fed3a76758179797f1769e60ee4eb\"" Jan 29 16:26:34.523057 systemd[1]: Started cri-containerd-e57078a613207be5987c183a26187535b07fed3a76758179797f1769e60ee4eb.scope - libcontainer container e57078a613207be5987c183a26187535b07fed3a76758179797f1769e60ee4eb. Jan 29 16:26:34.571378 containerd[1909]: time="2025-01-29T16:26:34.571329284Z" level=info msg="StartContainer for \"e57078a613207be5987c183a26187535b07fed3a76758179797f1769e60ee4eb\" returns successfully" Jan 29 16:26:34.583798 systemd[1]: cri-containerd-e57078a613207be5987c183a26187535b07fed3a76758179797f1769e60ee4eb.scope: Deactivated successfully. Jan 29 16:26:34.584186 systemd[1]: cri-containerd-e57078a613207be5987c183a26187535b07fed3a76758179797f1769e60ee4eb.scope: Consumed 20ms CPU time, 7.7M memory peak, 2.2M read from disk. Jan 29 16:26:34.627457 containerd[1909]: time="2025-01-29T16:26:34.627326858Z" level=info msg="shim disconnected" id=e57078a613207be5987c183a26187535b07fed3a76758179797f1769e60ee4eb namespace=k8s.io Jan 29 16:26:34.627457 containerd[1909]: time="2025-01-29T16:26:34.627448912Z" level=warning msg="cleaning up after shim disconnected" id=e57078a613207be5987c183a26187535b07fed3a76758179797f1769e60ee4eb namespace=k8s.io Jan 29 16:26:34.627457 containerd[1909]: time="2025-01-29T16:26:34.627466026Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:26:34.908100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e57078a613207be5987c183a26187535b07fed3a76758179797f1769e60ee4eb-rootfs.mount: Deactivated successfully. 
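Each time one of these short-lived containers exits, containerd logs a trio of messages keyed by the container ID: "shim disconnected", "cleaning up after shim disconnected", and "cleaning up dead shim". Pulling those IDs out of a captured journal takes only the standard library, as in the scan below; the id= field layout is copied from the lines above, and the parsing is illustrative rather than a stable containerd format.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// extractShimIDs scans containerd log lines and collects the container IDs
// named in "shim disconnected" messages (the id=<hex> field in the entries above).
func extractShimIDs(logText string) []string {
	seen := map[string]bool{}
	var ids []string
	sc := bufio.NewScanner(strings.NewReader(logText))
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "shim disconnected") {
			continue
		}
		for _, field := range strings.Fields(line) {
			if v, ok := strings.CutPrefix(field, "id="); ok && !seen[v] {
				seen[v] = true
				ids = append(ids, v)
			}
		}
	}
	return ids
}

func main() {
	sample := `msg="shim disconnected" id=7b6ad840a1bacfeb namespace=k8s.io
msg="cleaning up dead shim" namespace=k8s.io`
	fmt.Println(extractShimIDs(sample)) // [7b6ad840a1bacfeb]
}
```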
Jan 29 16:26:35.059047 kubelet[2371]: E0129 16:26:35.058986 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:35.441148 containerd[1909]: time="2025-01-29T16:26:35.441107135Z" level=info msg="CreateContainer within sandbox \"deac393ad68262b533af707887922299b635b8d5b2249d8c0f4f466bf0cd6713\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:26:35.471314 containerd[1909]: time="2025-01-29T16:26:35.471259708Z" level=info msg="CreateContainer within sandbox \"deac393ad68262b533af707887922299b635b8d5b2249d8c0f4f466bf0cd6713\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f8948cbc22918a4637328b16786dd5eb298ffe9fe5cbc7c50c4a2c9aacc6ae71\"" Jan 29 16:26:35.472122 containerd[1909]: time="2025-01-29T16:26:35.472079806Z" level=info msg="StartContainer for \"f8948cbc22918a4637328b16786dd5eb298ffe9fe5cbc7c50c4a2c9aacc6ae71\"" Jan 29 16:26:35.523788 systemd[1]: Started cri-containerd-f8948cbc22918a4637328b16786dd5eb298ffe9fe5cbc7c50c4a2c9aacc6ae71.scope - libcontainer container f8948cbc22918a4637328b16786dd5eb298ffe9fe5cbc7c50c4a2c9aacc6ae71. Jan 29 16:26:35.590105 containerd[1909]: time="2025-01-29T16:26:35.588338621Z" level=info msg="StartContainer for \"f8948cbc22918a4637328b16786dd5eb298ffe9fe5cbc7c50c4a2c9aacc6ae71\" returns successfully" Jan 29 16:26:35.598067 systemd[1]: cri-containerd-f8948cbc22918a4637328b16786dd5eb298ffe9fe5cbc7c50c4a2c9aacc6ae71.scope: Deactivated successfully. Jan 29 16:26:35.644811 containerd[1909]: time="2025-01-29T16:26:35.644706900Z" level=info msg="shim disconnected" id=f8948cbc22918a4637328b16786dd5eb298ffe9fe5cbc7c50c4a2c9aacc6ae71 namespace=k8s.io Jan 29 16:26:35.644811 containerd[1909]: time="2025-01-29T16:26:35.644784453Z" level=warning msg="cleaning up after shim disconnected" id=f8948cbc22918a4637328b16786dd5eb298ffe9fe5cbc7c50c4a2c9aacc6ae71 namespace=k8s.io Jan 29 16:26:35.644811 containerd[1909]: time="2025-01-29T16:26:35.644798949Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:26:35.908133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8948cbc22918a4637328b16786dd5eb298ffe9fe5cbc7c50c4a2c9aacc6ae71-rootfs.mount: Deactivated successfully. Jan 29 16:26:36.062080 kubelet[2371]: E0129 16:26:36.060091 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:36.079373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount804476047.mount: Deactivated successfully. 
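The pattern in the last few entries is cilium's ordered start-up: mount-cgroup runs to completion and exits, then apply-sysctl-overwrites, then mount-bpf-fs, each gated on the previous one succeeding, which is how Kubernetes init containers behave before the main container starts. Below is a minimal sketch of that run-in-order, stop-on-first-failure behaviour; the step names come from the log, the runner itself is purely illustrative.

```go
package main

import (
	"fmt"
	"log"
)

// initStep models one short-lived init container: it must run to completion
// successfully before the next step is attempted.
type initStep struct {
	name string
	run  func() error
}

// runInitSteps executes the steps strictly in order and aborts on the first
// failure, mirroring how init containers gate the containers that follow.
func runInitSteps(steps []initStep) error {
	for _, s := range steps {
		log.Printf("StartContainer for %q", s.name)
		if err := s.run(); err != nil {
			return fmt.Errorf("init step %s: %w", s.name, err)
		}
		log.Printf("%q returned successfully", s.name)
	}
	return nil
}

func main() {
	ok := func() error { return nil }
	steps := []initStep{
		{"mount-cgroup", ok},
		{"apply-sysctl-overwrites", ok},
		{"mount-bpf-fs", ok},
	}
	if err := runInitSteps(steps); err != nil {
		log.Fatal(err)
	}
	fmt.Println("init steps done; later steps follow the same pattern")
}
```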
Jan 29 16:26:36.455748 containerd[1909]: time="2025-01-29T16:26:36.455561832Z" level=info msg="CreateContainer within sandbox \"deac393ad68262b533af707887922299b635b8d5b2249d8c0f4f466bf0cd6713\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:26:36.488551 containerd[1909]: time="2025-01-29T16:26:36.488100284Z" level=info msg="CreateContainer within sandbox \"deac393ad68262b533af707887922299b635b8d5b2249d8c0f4f466bf0cd6713\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d561dfaf169003ee4dc93d0de0ff71c71fc4e8cdbc53b2a1b77c54df91ac99fb\"" Jan 29 16:26:36.489909 containerd[1909]: time="2025-01-29T16:26:36.489520238Z" level=info msg="StartContainer for \"d561dfaf169003ee4dc93d0de0ff71c71fc4e8cdbc53b2a1b77c54df91ac99fb\"" Jan 29 16:26:36.533038 systemd[1]: Started cri-containerd-d561dfaf169003ee4dc93d0de0ff71c71fc4e8cdbc53b2a1b77c54df91ac99fb.scope - libcontainer container d561dfaf169003ee4dc93d0de0ff71c71fc4e8cdbc53b2a1b77c54df91ac99fb. Jan 29 16:26:36.599752 systemd[1]: cri-containerd-d561dfaf169003ee4dc93d0de0ff71c71fc4e8cdbc53b2a1b77c54df91ac99fb.scope: Deactivated successfully. Jan 29 16:26:36.605339 containerd[1909]: time="2025-01-29T16:26:36.605144609Z" level=info msg="StartContainer for \"d561dfaf169003ee4dc93d0de0ff71c71fc4e8cdbc53b2a1b77c54df91ac99fb\" returns successfully" Jan 29 16:26:36.607955 containerd[1909]: time="2025-01-29T16:26:36.606757402Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f225029_7cf3_4f20_8004_32fca9fb4bfa.slice/cri-containerd-d561dfaf169003ee4dc93d0de0ff71c71fc4e8cdbc53b2a1b77c54df91ac99fb.scope/memory.events\": no such file or directory" Jan 29 16:26:36.679828 containerd[1909]: time="2025-01-29T16:26:36.679491307Z" level=info msg="shim disconnected" id=d561dfaf169003ee4dc93d0de0ff71c71fc4e8cdbc53b2a1b77c54df91ac99fb namespace=k8s.io Jan 29 16:26:36.679828 containerd[1909]: time="2025-01-29T16:26:36.679553317Z" level=warning msg="cleaning up after shim disconnected" id=d561dfaf169003ee4dc93d0de0ff71c71fc4e8cdbc53b2a1b77c54df91ac99fb namespace=k8s.io Jan 29 16:26:36.679828 containerd[1909]: time="2025-01-29T16:26:36.679564557Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:26:36.710833 containerd[1909]: time="2025-01-29T16:26:36.709454788Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:26:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:26:37.060694 kubelet[2371]: E0129 16:26:37.060635 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:37.072647 containerd[1909]: time="2025-01-29T16:26:37.072565738Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:37.074610 containerd[1909]: time="2025-01-29T16:26:37.074528062Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 16:26:37.077180 containerd[1909]: time="2025-01-29T16:26:37.077001775Z" level=info msg="ImageCreate event 
name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:26:37.079775 containerd[1909]: time="2025-01-29T16:26:37.079218190Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.062304496s" Jan 29 16:26:37.079775 containerd[1909]: time="2025-01-29T16:26:37.079263318Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 16:26:37.083545 containerd[1909]: time="2025-01-29T16:26:37.083507270Z" level=info msg="CreateContainer within sandbox \"3818ca791bdcebeaa1e6f4c60255246963c77597635e0e01d9c9edd26b1ca1e3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 16:26:37.135652 containerd[1909]: time="2025-01-29T16:26:37.135607031Z" level=info msg="CreateContainer within sandbox \"3818ca791bdcebeaa1e6f4c60255246963c77597635e0e01d9c9edd26b1ca1e3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1c93f5099d054aebd8593f36c68fe3aad7afd50eb105529cd31c10f39c6796b6\"" Jan 29 16:26:37.136353 containerd[1909]: time="2025-01-29T16:26:37.136323263Z" level=info msg="StartContainer for \"1c93f5099d054aebd8593f36c68fe3aad7afd50eb105529cd31c10f39c6796b6\"" Jan 29 16:26:37.137373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2983328497.mount: Deactivated successfully. Jan 29 16:26:37.195523 kubelet[2371]: E0129 16:26:37.194128 2371 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 16:26:37.194827 systemd[1]: Started cri-containerd-1c93f5099d054aebd8593f36c68fe3aad7afd50eb105529cd31c10f39c6796b6.scope - libcontainer container 1c93f5099d054aebd8593f36c68fe3aad7afd50eb105529cd31c10f39c6796b6. 
Jan 29 16:26:37.237215 containerd[1909]: time="2025-01-29T16:26:37.236255290Z" level=info msg="StartContainer for \"1c93f5099d054aebd8593f36c68fe3aad7afd50eb105529cd31c10f39c6796b6\" returns successfully" Jan 29 16:26:37.464803 containerd[1909]: time="2025-01-29T16:26:37.464686724Z" level=info msg="CreateContainer within sandbox \"deac393ad68262b533af707887922299b635b8d5b2249d8c0f4f466bf0cd6713\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:26:37.524414 kubelet[2371]: I0129 16:26:37.524331 2371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-mhd67" podStartSLOduration=3.460015539 podStartE2EDuration="7.524313594s" podCreationTimestamp="2025-01-29 16:26:30 +0000 UTC" firstStartedPulling="2025-01-29 16:26:33.016130019 +0000 UTC m=+66.535577846" lastFinishedPulling="2025-01-29 16:26:37.080428077 +0000 UTC m=+70.599875901" observedRunningTime="2025-01-29 16:26:37.524114027 +0000 UTC m=+71.043561870" watchObservedRunningTime="2025-01-29 16:26:37.524313594 +0000 UTC m=+71.043761426" Jan 29 16:26:37.530689 containerd[1909]: time="2025-01-29T16:26:37.530616602Z" level=info msg="CreateContainer within sandbox \"deac393ad68262b533af707887922299b635b8d5b2249d8c0f4f466bf0cd6713\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"911bcbc6d5777f2dd5b60c2008280e49d073d5f38a8ee169aa85ca877a42f205\"" Jan 29 16:26:37.531582 containerd[1909]: time="2025-01-29T16:26:37.531533523Z" level=info msg="StartContainer for \"911bcbc6d5777f2dd5b60c2008280e49d073d5f38a8ee169aa85ca877a42f205\"" Jan 29 16:26:37.577774 systemd[1]: Started cri-containerd-911bcbc6d5777f2dd5b60c2008280e49d073d5f38a8ee169aa85ca877a42f205.scope - libcontainer container 911bcbc6d5777f2dd5b60c2008280e49d073d5f38a8ee169aa85ca877a42f205. Jan 29 16:26:37.617972 containerd[1909]: time="2025-01-29T16:26:37.617924856Z" level=info msg="StartContainer for \"911bcbc6d5777f2dd5b60c2008280e49d073d5f38a8ee169aa85ca877a42f205\" returns successfully" Jan 29 16:26:37.910680 systemd[1]: run-containerd-runc-k8s.io-1c93f5099d054aebd8593f36c68fe3aad7afd50eb105529cd31c10f39c6796b6-runc.TrLH5x.mount: Deactivated successfully. 
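The pod_startup_latency_tracker entry above is self-consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and subtracting the image-pull window (the difference of the m=+ monotonic offsets at lastFinishedPulling and firstStartedPulling) reproduces podStartSLOduration to the nanosecond. The sketch below redoes that arithmetic with the literal values from the entry; it says nothing about how the kubelet computes the figures internally.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// All literals are copied from the pod_startup_latency_tracker entry
	// above: wall-clock timestamps in Go's default time.Time format, and the
	// monotonic m=+ offsets expressed as seconds.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-01-29 16:26:30 +0000 UTC")
	running, _ := time.Parse(layout, "2025-01-29 16:26:37.524313594 +0000 UTC")

	firstPull, _ := time.ParseDuration("66.535577846s") // m=+ at firstStartedPulling
	lastPull, _ := time.ParseDuration("70.599875901s")  // m=+ at lastFinishedPulling

	e2e := running.Sub(created)
	fmt.Println("podStartE2EDuration:", e2e)                      // 7.524313594s
	fmt.Println("podStartSLOduration:", e2e-(lastPull-firstPull)) // 3.460015539s
}
```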
Jan 29 16:26:38.061701 kubelet[2371]: E0129 16:26:38.061616 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:38.377617 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 29 16:26:38.546734 kubelet[2371]: I0129 16:26:38.545989 2371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k29v7" podStartSLOduration=7.545969249 podStartE2EDuration="7.545969249s" podCreationTimestamp="2025-01-29 16:26:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:26:38.536601029 +0000 UTC m=+72.056048870" watchObservedRunningTime="2025-01-29 16:26:38.545969249 +0000 UTC m=+72.065417090" Jan 29 16:26:39.062750 kubelet[2371]: E0129 16:26:39.062691 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:40.063320 kubelet[2371]: E0129 16:26:40.063216 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:41.063543 kubelet[2371]: E0129 16:26:41.063487 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:41.844856 systemd-networkd[1741]: lxc_health: Link UP Jan 29 16:26:41.846206 (udev-worker)[5157]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:26:41.847113 systemd-networkd[1741]: lxc_health: Gained carrier Jan 29 16:26:42.066739 kubelet[2371]: E0129 16:26:42.066683 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:43.067082 kubelet[2371]: E0129 16:26:43.067038 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:43.290822 systemd-networkd[1741]: lxc_health: Gained IPv6LL Jan 29 16:26:43.940384 systemd[1]: run-containerd-runc-k8s.io-911bcbc6d5777f2dd5b60c2008280e49d073d5f38a8ee169aa85ca877a42f205-runc.SHZCaA.mount: Deactivated successfully. Jan 29 16:26:44.068419 kubelet[2371]: E0129 16:26:44.068271 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:45.069115 kubelet[2371]: E0129 16:26:45.069029 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:45.933789 ntpd[1877]: Listen normally on 15 lxc_health [fe80::9048:8cff:fe84:c6a8%15]:123 Jan 29 16:26:45.934327 ntpd[1877]: 29 Jan 16:26:45 ntpd[1877]: Listen normally on 15 lxc_health [fe80::9048:8cff:fe84:c6a8%15]:123 Jan 29 16:26:46.070084 kubelet[2371]: E0129 16:26:46.070020 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:46.381116 systemd[1]: run-containerd-runc-k8s.io-911bcbc6d5777f2dd5b60c2008280e49d073d5f38a8ee169aa85ca877a42f205-runc.eZ82bT.mount: Deactivated successfully. 
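The once-per-second kubelet messages that dominate the rest of the capture all come from the file config source: the kubelet's static pod path is /etc/kubernetes/manifests, the directory does not exist on this node, so each pass logs "Unable to read config path ... ignoring" and moves on. The loop below only reproduces that observable behaviour; the real kubelet combines a file watch with periodic re-reads rather than a bare stat loop.

```go
package main

import (
	"log"
	"os"
	"time"
)

func main() {
	const path = "/etc/kubernetes/manifests" // static pod directory from the log

	// Check the path once a second and log when it is missing, mimicking the
	// repeated "Unable to read config path" entries in the journal above.
	for i := 0; i < 3; i++ {
		if _, err := os.Stat(path); os.IsNotExist(err) {
			log.Printf("Unable to read config path, ignoring: %v", err)
		} else if err != nil {
			log.Printf("stat %s: %v", path, err)
		}
		time.Sleep(time.Second)
	}
}
```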
Jan 29 16:26:46.997166 kubelet[2371]: E0129 16:26:46.997090 2371 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:47.070492 kubelet[2371]: E0129 16:26:47.070408 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:48.070997 kubelet[2371]: E0129 16:26:48.070863 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:49.071626 kubelet[2371]: E0129 16:26:49.071529 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:50.072610 kubelet[2371]: E0129 16:26:50.072536 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:51.073050 kubelet[2371]: E0129 16:26:51.073004 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:52.073713 kubelet[2371]: E0129 16:26:52.073662 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:53.074859 kubelet[2371]: E0129 16:26:53.074801 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:54.076054 kubelet[2371]: E0129 16:26:54.075946 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:55.077273 kubelet[2371]: E0129 16:26:55.077209 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:56.077691 kubelet[2371]: E0129 16:26:56.077649 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:57.078646 kubelet[2371]: E0129 16:26:57.078593 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:58.079378 kubelet[2371]: E0129 16:26:58.079322 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:26:59.080060 kubelet[2371]: E0129 16:26:59.080012 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:00.080210 kubelet[2371]: E0129 16:27:00.080159 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:01.080394 kubelet[2371]: E0129 16:27:01.080323 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:02.081449 kubelet[2371]: E0129 16:27:02.081388 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:03.082299 kubelet[2371]: E0129 16:27:03.082174 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:04.083024 kubelet[2371]: E0129 16:27:04.082963 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:05.083434 kubelet[2371]: E0129 16:27:05.083376 2371 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:06.084230 kubelet[2371]: E0129 16:27:06.084172 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:06.996795 kubelet[2371]: E0129 16:27:06.996736 2371 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:07.084721 kubelet[2371]: E0129 16:27:07.084671 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:08.085416 kubelet[2371]: E0129 16:27:08.085358 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:08.907651 kubelet[2371]: E0129 16:27:08.907539 2371 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.123?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:27:09.086111 kubelet[2371]: E0129 16:27:09.086069 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:10.058730 kubelet[2371]: E0129 16:27:10.058615 2371 kubelet_node_status.go:544] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-01-29T16:27:00Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-29T16:27:00Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-29T16:27:00Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-29T16:27:00Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":71015439},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\\\",\\\"registry.k8s.io/kube-proxy:v1.30.9\\\"],\\\"sizeBytes\\\":29057356},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286}]}}\" for node \"172.31.23.123\": Patch \"https://172.31.23.149:6443/api/v1/nodes/172.31.23.123/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:27:10.086240 kubelet[2371]: E0129 16:27:10.086198 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 
16:27:11.087420 kubelet[2371]: E0129 16:27:11.087365 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:12.088387 kubelet[2371]: E0129 16:27:12.088333 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:13.088671 kubelet[2371]: E0129 16:27:13.088628 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:14.089062 kubelet[2371]: E0129 16:27:14.089002 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:15.089828 kubelet[2371]: E0129 16:27:15.089759 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:16.090893 kubelet[2371]: E0129 16:27:16.090829 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:17.091364 kubelet[2371]: E0129 16:27:17.091304 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:18.092490 kubelet[2371]: E0129 16:27:18.092433 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:18.908708 kubelet[2371]: E0129 16:27:18.908637 2371 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.123?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:27:19.093278 kubelet[2371]: E0129 16:27:19.093234 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:20.059812 kubelet[2371]: E0129 16:27:20.059763 2371 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.23.123\": Get \"https://172.31.23.149:6443/api/v1/nodes/172.31.23.123?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:27:20.094035 kubelet[2371]: E0129 16:27:20.093979 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:21.094972 kubelet[2371]: E0129 16:27:21.094926 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:22.095853 kubelet[2371]: E0129 16:27:22.095795 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:23.096658 kubelet[2371]: E0129 16:27:23.096602 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:24.097490 kubelet[2371]: E0129 16:27:24.097431 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:25.098087 kubelet[2371]: E0129 16:27:25.098020 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:26.099187 kubelet[2371]: E0129 16:27:26.099125 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 
16:27:26.996813 kubelet[2371]: E0129 16:27:26.996760 2371 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:27.063852 containerd[1909]: time="2025-01-29T16:27:27.063799468Z" level=info msg="StopPodSandbox for \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\"" Jan 29 16:27:27.064305 containerd[1909]: time="2025-01-29T16:27:27.063906222Z" level=info msg="TearDown network for sandbox \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\" successfully" Jan 29 16:27:27.064305 containerd[1909]: time="2025-01-29T16:27:27.063964053Z" level=info msg="StopPodSandbox for \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\" returns successfully" Jan 29 16:27:27.064700 containerd[1909]: time="2025-01-29T16:27:27.064665663Z" level=info msg="RemovePodSandbox for \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\"" Jan 29 16:27:27.064799 containerd[1909]: time="2025-01-29T16:27:27.064709833Z" level=info msg="Forcibly stopping sandbox \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\"" Jan 29 16:27:27.064799 containerd[1909]: time="2025-01-29T16:27:27.064774988Z" level=info msg="TearDown network for sandbox \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\" successfully" Jan 29 16:27:27.083109 containerd[1909]: time="2025-01-29T16:27:27.081507543Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:27:27.083109 containerd[1909]: time="2025-01-29T16:27:27.081610644Z" level=info msg="RemovePodSandbox \"da1739ae20c996da96d9b53ccc9f9387fd57152a2f7fdb978ba4b856ad1a7227\" returns successfully" Jan 29 16:27:27.100730 kubelet[2371]: E0129 16:27:27.100645 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:28.101436 kubelet[2371]: E0129 16:27:28.101390 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:28.909207 kubelet[2371]: E0129 16:27:28.909156 2371 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.123?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:27:29.101597 kubelet[2371]: E0129 16:27:29.101530 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:30.060103 kubelet[2371]: E0129 16:27:30.060042 2371 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.23.123\": Get \"https://172.31.23.149:6443/api/v1/nodes/172.31.23.123?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 16:27:30.102257 kubelet[2371]: E0129 16:27:30.102193 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:31.102458 kubelet[2371]: E0129 16:27:31.102411 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:32.103225 kubelet[2371]: E0129 16:27:32.103170 2371 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:33.103806 kubelet[2371]: E0129 16:27:33.103750 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:34.104867 kubelet[2371]: E0129 16:27:34.104811 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:35.106043 kubelet[2371]: E0129 16:27:35.105973 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:36.106686 kubelet[2371]: E0129 16:27:36.106631 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:36.917774 kubelet[2371]: E0129 16:27:36.917731 2371 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.123?timeout=10s\": unexpected EOF" Jan 29 16:27:36.922507 kubelet[2371]: E0129 16:27:36.922174 2371 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.123?timeout=10s\": read tcp 172.31.23.123:52404->172.31.23.149:6443: read: connection reset by peer" Jan 29 16:27:36.922507 kubelet[2371]: I0129 16:27:36.922216 2371 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 16:27:36.922962 kubelet[2371]: E0129 16:27:36.922888 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.123?timeout=10s\": dial tcp 172.31.23.149:6443: connect: connection refused" interval="200ms" Jan 29 16:27:37.107804 kubelet[2371]: E0129 16:27:37.107743 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:37.125347 kubelet[2371]: E0129 16:27:37.125236 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.123?timeout=10s\": dial tcp 172.31.23.149:6443: connect: connection refused" interval="400ms" Jan 29 16:27:37.526945 kubelet[2371]: E0129 16:27:37.526883 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.123?timeout=10s\": dial tcp 172.31.23.149:6443: connect: connection refused" interval="800ms" Jan 29 16:27:37.918901 kubelet[2371]: E0129 16:27:37.918775 2371 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.23.123\": Get \"https://172.31.23.149:6443/api/v1/nodes/172.31.23.123?timeout=10s\": dial tcp 172.31.23.149:6443: connect: connection refused - error from a previous attempt: unexpected EOF" Jan 29 16:27:37.919602 kubelet[2371]: E0129 16:27:37.919552 2371 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.23.123\": Get \"https://172.31.23.149:6443/api/v1/nodes/172.31.23.123?timeout=10s\": dial tcp 172.31.23.149:6443: connect: connection refused" Jan 29 16:27:37.919602 kubelet[2371]: E0129 16:27:37.919598 2371 kubelet_node_status.go:531] "Unable to update node status" err="update node status 
exceeds retry count" Jan 29 16:27:38.107987 kubelet[2371]: E0129 16:27:38.107926 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:39.108318 kubelet[2371]: E0129 16:27:39.108256 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:40.108792 kubelet[2371]: E0129 16:27:40.108736 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:41.109143 kubelet[2371]: E0129 16:27:41.109068 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:42.109585 kubelet[2371]: E0129 16:27:42.109516 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:43.110513 kubelet[2371]: E0129 16:27:43.110465 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:44.111168 kubelet[2371]: E0129 16:27:44.111128 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:45.112299 kubelet[2371]: E0129 16:27:45.112243 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:46.112472 kubelet[2371]: E0129 16:27:46.112417 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:46.996695 kubelet[2371]: E0129 16:27:46.996641 2371 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:47.112684 kubelet[2371]: E0129 16:27:47.112617 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:48.113830 kubelet[2371]: E0129 16:27:48.113760 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:48.328544 kubelet[2371]: E0129 16:27:48.328473 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.23.123?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Jan 29 16:27:49.114474 kubelet[2371]: E0129 16:27:49.114408 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:50.115366 kubelet[2371]: E0129 16:27:50.115304 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:51.116096 kubelet[2371]: E0129 16:27:51.116046 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:52.117202 kubelet[2371]: E0129 16:27:52.117148 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:53.117648 kubelet[2371]: E0129 16:27:53.117559 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:54.118110 kubelet[2371]: E0129 16:27:54.118051 2371 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:27:55.118816 kubelet[2371]: E0129 16:27:55.118757 2371 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"