Jan 30 13:53:03.884078 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 13:53:03.884103 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:53:03.884113 kernel: BIOS-provided physical RAM map: Jan 30 13:53:03.884120 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 13:53:03.884126 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 13:53:03.884132 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 13:53:03.884143 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Jan 30 13:53:03.884149 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Jan 30 13:53:03.884156 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Jan 30 13:53:03.884163 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 13:53:03.884169 kernel: NX (Execute Disable) protection: active Jan 30 13:53:03.884176 kernel: APIC: Static calls initialized Jan 30 13:53:03.884183 kernel: SMBIOS 2.7 present. Jan 30 13:53:03.884190 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 30 13:53:03.884201 kernel: Hypervisor detected: KVM Jan 30 13:53:03.884209 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 13:53:03.884217 kernel: kvm-clock: using sched offset of 5845289758 cycles Jan 30 13:53:03.884225 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 13:53:03.884233 kernel: tsc: Detected 2499.998 MHz processor Jan 30 13:53:03.884241 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:53:03.884249 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:53:03.884259 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Jan 30 13:53:03.884267 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 13:53:03.884275 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:53:03.884282 kernel: Using GB pages for direct mapping Jan 30 13:53:03.884290 kernel: ACPI: Early table checksum verification disabled Jan 30 13:53:03.884298 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Jan 30 13:53:03.884305 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Jan 30 13:53:03.884313 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 30 13:53:03.884321 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 30 13:53:03.884331 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Jan 30 13:53:03.884338 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 30 13:53:03.884346 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 30 13:53:03.884354 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 30 13:53:03.884361 
kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 30 13:53:03.884369 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 30 13:53:03.884377 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 30 13:53:03.884384 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 30 13:53:03.884392 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Jan 30 13:53:03.884402 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Jan 30 13:53:03.884413 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Jan 30 13:53:03.884421 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Jan 30 13:53:03.884429 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Jan 30 13:53:03.884438 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Jan 30 13:53:03.884448 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Jan 30 13:53:03.884456 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Jan 30 13:53:03.884464 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Jan 30 13:53:03.884472 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Jan 30 13:53:03.884480 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 13:53:03.884489 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 13:53:03.884497 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 30 13:53:03.884505 kernel: NUMA: Initialized distance table, cnt=1 Jan 30 13:53:03.884513 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Jan 30 13:53:03.884523 kernel: Zone ranges: Jan 30 13:53:03.884532 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:53:03.884540 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Jan 30 13:53:03.884548 kernel: Normal empty Jan 30 13:53:03.884556 kernel: Movable zone start for each node Jan 30 13:53:03.884564 kernel: Early memory node ranges Jan 30 13:53:03.884572 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 13:53:03.884580 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Jan 30 13:53:03.884588 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Jan 30 13:53:03.884596 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:53:03.884607 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 13:53:03.884615 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Jan 30 13:53:03.884623 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 30 13:53:03.884632 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 13:53:03.884640 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 30 13:53:03.884648 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 13:53:03.884656 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:53:03.884664 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 13:53:03.884672 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 13:53:03.884683 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:53:03.884691 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 13:53:03.884699 kernel: TSC deadline timer available Jan 30 13:53:03.884707 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 13:53:03.884715 
kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 13:53:03.884724 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jan 30 13:53:03.884732 kernel: Booting paravirtualized kernel on KVM Jan 30 13:53:03.884740 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:53:03.884748 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 13:53:03.884759 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 13:53:03.884767 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 13:53:03.884775 kernel: pcpu-alloc: [0] 0 1 Jan 30 13:53:03.884783 kernel: kvm-guest: PV spinlocks enabled Jan 30 13:53:03.884791 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 30 13:53:03.884801 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:53:03.884833 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:53:03.884841 kernel: random: crng init done Jan 30 13:53:03.884852 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:53:03.884860 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 13:53:03.884868 kernel: Fallback order for Node 0: 0 Jan 30 13:53:03.884876 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Jan 30 13:53:03.884884 kernel: Policy zone: DMA32 Jan 30 13:53:03.884893 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:53:03.884901 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved) Jan 30 13:53:03.884909 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 13:53:03.884917 kernel: Kernel/User page tables isolation: enabled Jan 30 13:53:03.884928 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 13:53:03.884936 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:53:03.884945 kernel: Dynamic Preempt: voluntary Jan 30 13:53:03.884953 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:53:03.884962 kernel: rcu: RCU event tracing is enabled. Jan 30 13:53:03.884970 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 13:53:03.884978 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:53:03.884987 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:53:03.884995 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:53:03.885006 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:53:03.885014 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 13:53:03.885022 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 13:53:03.885030 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 30 13:53:03.885038 kernel: Console: colour VGA+ 80x25 Jan 30 13:53:03.885047 kernel: printk: console [ttyS0] enabled Jan 30 13:53:03.885067 kernel: ACPI: Core revision 20230628 Jan 30 13:53:03.885075 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 30 13:53:03.885083 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:53:03.885093 kernel: x2apic enabled Jan 30 13:53:03.885120 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 13:53:03.885137 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 30 13:53:03.885148 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jan 30 13:53:03.885157 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 30 13:53:03.885166 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 30 13:53:03.885174 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:53:03.885182 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 13:53:03.885191 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:53:03.885199 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:53:03.885208 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jan 30 13:53:03.885217 kernel: RETBleed: Vulnerable Jan 30 13:53:03.885225 kernel: Speculative Store Bypass: Vulnerable Jan 30 13:53:03.885237 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:53:03.885245 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:53:03.885254 kernel: GDS: Unknown: Dependent on hypervisor status Jan 30 13:53:03.885262 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:53:03.885271 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:53:03.885280 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:53:03.885291 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 30 13:53:03.885299 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 30 13:53:03.885308 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 30 13:53:03.885317 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 30 13:53:03.885325 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 30 13:53:03.885334 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 30 13:53:03.885342 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:53:03.885351 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 30 13:53:03.885360 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 30 13:53:03.885368 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 30 13:53:03.885377 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 30 13:53:03.885388 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 30 13:53:03.885396 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 30 13:53:03.885405 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Jan 30 13:53:03.885414 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:53:03.885422 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:53:03.885431 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:53:03.885439 kernel: landlock: Up and running. Jan 30 13:53:03.885448 kernel: SELinux: Initializing. Jan 30 13:53:03.885457 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 13:53:03.885465 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 13:53:03.885474 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 30 13:53:03.885485 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:53:03.885494 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:53:03.885503 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:53:03.885512 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 30 13:53:03.885521 kernel: signal: max sigframe size: 3632 Jan 30 13:53:03.885530 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:53:03.885538 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:53:03.885547 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 13:53:03.885556 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:53:03.885567 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:53:03.885576 kernel: .... node #0, CPUs: #1 Jan 30 13:53:03.885585 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 30 13:53:03.885594 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 30 13:53:03.885603 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 13:53:03.885612 kernel: smpboot: Max logical packages: 1 Jan 30 13:53:03.885620 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jan 30 13:53:03.885629 kernel: devtmpfs: initialized Jan 30 13:53:03.885638 kernel: x86/mm: Memory block size: 128MB Jan 30 13:53:03.885649 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:53:03.885658 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 13:53:03.885667 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:53:03.885676 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:53:03.885685 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:53:03.885693 kernel: audit: type=2000 audit(1738245183.306:1): state=initialized audit_enabled=0 res=1 Jan 30 13:53:03.885702 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:53:03.885711 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:53:03.885719 kernel: cpuidle: using governor menu Jan 30 13:53:03.885730 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:53:03.885739 kernel: dca service started, version 1.12.1 Jan 30 13:53:03.885748 kernel: PCI: Using configuration type 1 for base access Jan 30 13:53:03.885757 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:53:03.885765 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:53:03.885774 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:53:03.885783 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:53:03.885792 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:53:03.885800 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:53:03.887846 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:53:03.887857 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:53:03.887866 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:53:03.887875 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 30 13:53:03.887884 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:53:03.887893 kernel: ACPI: Interpreter enabled Jan 30 13:53:03.887902 kernel: ACPI: PM: (supports S0 S5) Jan 30 13:53:03.887911 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:53:03.887920 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:53:03.887934 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 13:53:03.887943 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 30 13:53:03.887952 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 13:53:03.888114 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:53:03.888216 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 30 13:53:03.888305 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 30 13:53:03.888317 kernel: acpiphp: Slot [3] registered Jan 30 13:53:03.888330 kernel: acpiphp: Slot [4] registered Jan 30 13:53:03.888339 kernel: acpiphp: Slot [5] registered Jan 30 13:53:03.888347 kernel: acpiphp: Slot [6] registered Jan 30 13:53:03.888356 kernel: acpiphp: Slot [7] registered Jan 30 13:53:03.888365 kernel: acpiphp: Slot [8] registered Jan 30 13:53:03.888373 kernel: acpiphp: Slot [9] registered Jan 30 13:53:03.888382 kernel: acpiphp: Slot [10] registered Jan 30 13:53:03.888391 kernel: acpiphp: Slot [11] registered Jan 30 13:53:03.888401 kernel: acpiphp: Slot [12] registered Jan 30 13:53:03.888412 kernel: acpiphp: Slot [13] registered Jan 30 13:53:03.888421 kernel: acpiphp: Slot [14] registered Jan 30 13:53:03.888429 kernel: acpiphp: Slot [15] registered Jan 30 13:53:03.888438 kernel: acpiphp: Slot [16] registered Jan 30 13:53:03.888447 kernel: acpiphp: Slot [17] registered Jan 30 13:53:03.888456 kernel: acpiphp: Slot [18] registered Jan 30 13:53:03.888465 kernel: acpiphp: Slot [19] registered Jan 30 13:53:03.888473 kernel: acpiphp: Slot [20] registered Jan 30 13:53:03.888482 kernel: acpiphp: Slot [21] registered Jan 30 13:53:03.888491 kernel: acpiphp: Slot [22] registered Jan 30 13:53:03.888502 kernel: acpiphp: Slot [23] registered Jan 30 13:53:03.888511 kernel: acpiphp: Slot [24] registered Jan 30 13:53:03.888519 kernel: acpiphp: Slot [25] registered Jan 30 13:53:03.888528 kernel: acpiphp: Slot [26] registered Jan 30 13:53:03.888537 kernel: acpiphp: Slot [27] registered Jan 30 13:53:03.888546 kernel: acpiphp: Slot [28] registered Jan 30 13:53:03.888554 kernel: acpiphp: Slot [29] registered Jan 30 13:53:03.888563 kernel: acpiphp: Slot [30] registered Jan 30 13:53:03.888572 kernel: acpiphp: Slot [31] registered Jan 30 13:53:03.888583 kernel: PCI host bridge to bus 0000:00 
Jan 30 13:53:03.888682 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:53:03.888761 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 13:53:03.888866 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:53:03.888943 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 30 13:53:03.889018 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 13:53:03.889115 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 30 13:53:03.889215 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 30 13:53:03.889306 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jan 30 13:53:03.889393 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 30 13:53:03.889479 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jan 30 13:53:03.889564 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 30 13:53:03.889649 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 30 13:53:03.889734 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 30 13:53:03.891872 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 30 13:53:03.891971 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 30 13:53:03.892060 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 30 13:53:03.892157 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jan 30 13:53:03.892246 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Jan 30 13:53:03.892332 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 30 13:53:03.892437 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:53:03.892538 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 30 13:53:03.892627 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Jan 30 13:53:03.892720 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 30 13:53:03.892818 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Jan 30 13:53:03.894834 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 13:53:03.894847 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 13:53:03.894856 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 13:53:03.894869 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 13:53:03.894878 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 30 13:53:03.894887 kernel: iommu: Default domain type: Translated Jan 30 13:53:03.894896 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:53:03.894905 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:53:03.894914 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:53:03.894923 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 13:53:03.894932 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Jan 30 13:53:03.895049 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 30 13:53:03.895139 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 30 13:53:03.895230 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 13:53:03.895242 kernel: vgaarb: loaded Jan 30 13:53:03.895251 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 30 13:53:03.895260 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jan 30 13:53:03.895269 kernel: clocksource: Switched 
to clocksource kvm-clock Jan 30 13:53:03.895278 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:53:03.895287 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:53:03.895299 kernel: pnp: PnP ACPI init Jan 30 13:53:03.895308 kernel: pnp: PnP ACPI: found 5 devices Jan 30 13:53:03.895317 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:53:03.895326 kernel: NET: Registered PF_INET protocol family Jan 30 13:53:03.895335 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 13:53:03.895344 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 30 13:53:03.895353 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:53:03.895362 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:53:03.895371 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:53:03.895382 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 30 13:53:03.895391 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 13:53:03.895400 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 13:53:03.895409 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:53:03.895418 kernel: NET: Registered PF_XDP protocol family Jan 30 13:53:03.895502 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:53:03.895583 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 13:53:03.895663 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:53:03.895747 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 30 13:53:03.895862 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 30 13:53:03.895875 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:53:03.895884 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 13:53:03.895893 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 30 13:53:03.895903 kernel: clocksource: Switched to clocksource tsc Jan 30 13:53:03.895912 kernel: Initialise system trusted keyrings Jan 30 13:53:03.895920 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 30 13:53:03.895933 kernel: Key type asymmetric registered Jan 30 13:53:03.895941 kernel: Asymmetric key parser 'x509' registered Jan 30 13:53:03.895950 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:53:03.895959 kernel: io scheduler mq-deadline registered Jan 30 13:53:03.895968 kernel: io scheduler kyber registered Jan 30 13:53:03.895977 kernel: io scheduler bfq registered Jan 30 13:53:03.895986 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:53:03.895994 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:53:03.896003 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:53:03.896015 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 13:53:03.896024 kernel: i8042: Warning: Keylock active Jan 30 13:53:03.896033 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 13:53:03.896041 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 13:53:03.896140 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 30 13:53:03.896226 kernel: rtc_cmos 00:00: registered as rtc0 Jan 30 
13:53:03.896310 kernel: rtc_cmos 00:00: setting system clock to 2025-01-30T13:53:03 UTC (1738245183) Jan 30 13:53:03.896393 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 30 13:53:03.896407 kernel: intel_pstate: CPU model not supported Jan 30 13:53:03.896416 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:53:03.896425 kernel: Segment Routing with IPv6 Jan 30 13:53:03.896434 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:53:03.896443 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:53:03.896452 kernel: Key type dns_resolver registered Jan 30 13:53:03.896460 kernel: IPI shorthand broadcast: enabled Jan 30 13:53:03.896469 kernel: sched_clock: Marking stable (458001767, 248104302)->(786094496, -79988427) Jan 30 13:53:03.896478 kernel: registered taskstats version 1 Jan 30 13:53:03.896490 kernel: Loading compiled-in X.509 certificates Jan 30 13:53:03.896499 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:53:03.896508 kernel: Key type .fscrypt registered Jan 30 13:53:03.896516 kernel: Key type fscrypt-provisioning registered Jan 30 13:53:03.896525 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:53:03.896534 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:53:03.896543 kernel: ima: No architecture policies found Jan 30 13:53:03.896551 kernel: clk: Disabling unused clocks Jan 30 13:53:03.896560 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:53:03.896572 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:53:03.896580 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:53:03.896589 kernel: Run /init as init process Jan 30 13:53:03.896598 kernel: with arguments: Jan 30 13:53:03.896607 kernel: /init Jan 30 13:53:03.896615 kernel: with environment: Jan 30 13:53:03.896624 kernel: HOME=/ Jan 30 13:53:03.896633 kernel: TERM=linux Jan 30 13:53:03.896641 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:53:03.896658 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:53:03.896679 systemd[1]: Detected virtualization amazon. Jan 30 13:53:03.896692 systemd[1]: Detected architecture x86-64. Jan 30 13:53:03.896701 systemd[1]: Running in initrd. Jan 30 13:53:03.896711 systemd[1]: No hostname configured, using default hostname. Jan 30 13:53:03.896723 systemd[1]: Hostname set to . Jan 30 13:53:03.896733 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:53:03.896742 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:53:03.896752 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:53:03.896762 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:53:03.896772 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:53:03.896782 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:53:03.896792 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Jan 30 13:53:03.898830 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:53:03.898846 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:53:03.898857 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:53:03.898867 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:53:03.898877 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:53:03.898887 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:53:03.898901 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:53:03.898911 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:53:03.898920 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:53:03.898930 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:53:03.898940 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:53:03.898950 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:53:03.898960 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:53:03.898970 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:53:03.898981 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:53:03.898993 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:53:03.899003 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:53:03.899013 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:53:03.899023 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:53:03.899033 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:53:03.899043 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:53:03.899053 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:53:03.899066 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:53:03.899076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:53:03.899086 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:53:03.899117 systemd-journald[178]: Collecting audit messages is disabled. Jan 30 13:53:03.899142 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:53:03.899152 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:53:03.899164 systemd-journald[178]: Journal started Jan 30 13:53:03.899187 systemd-journald[178]: Runtime Journal (/run/log/journal/ec23e6132b08bc3404bb1b978f07c0d7) is 4.8M, max 38.6M, 33.7M free. Jan 30 13:53:03.907830 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:53:03.907291 systemd-modules-load[179]: Inserted module 'overlay' Jan 30 13:53:04.046716 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:53:04.046759 kernel: Bridge firewalling registered Jan 30 13:53:04.046781 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 13:53:04.046800 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 30 13:53:03.947068 systemd-modules-load[179]: Inserted module 'br_netfilter' Jan 30 13:53:04.048458 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:53:04.058089 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:53:04.070180 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:53:04.084029 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:53:04.090025 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:53:04.090348 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:53:04.100005 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:53:04.100960 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:53:04.136246 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:53:04.140427 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:53:04.145715 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:53:04.170128 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:53:04.178900 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:53:04.187127 dracut-cmdline[212]: dracut-dracut-053 Jan 30 13:53:04.191470 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:53:04.238974 systemd-resolved[213]: Positive Trust Anchors: Jan 30 13:53:04.239383 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:53:04.242156 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:53:04.254708 systemd-resolved[213]: Defaulting to hostname 'linux'. Jan 30 13:53:04.257382 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:53:04.259011 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:53:04.283838 kernel: SCSI subsystem initialized Jan 30 13:53:04.293838 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:53:04.308832 kernel: iscsi: registered transport (tcp) Jan 30 13:53:04.332839 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:53:04.332908 kernel: QLogic iSCSI HBA Driver Jan 30 13:53:04.373101 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 30 13:53:04.384522 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:53:04.410085 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:53:04.410165 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:53:04.410186 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:53:04.451838 kernel: raid6: avx512x4 gen() 14570 MB/s Jan 30 13:53:04.468838 kernel: raid6: avx512x2 gen() 13160 MB/s Jan 30 13:53:04.485839 kernel: raid6: avx512x1 gen() 14296 MB/s Jan 30 13:53:04.502877 kernel: raid6: avx2x4 gen() 12641 MB/s Jan 30 13:53:04.519838 kernel: raid6: avx2x2 gen() 13746 MB/s Jan 30 13:53:04.536933 kernel: raid6: avx2x1 gen() 11109 MB/s Jan 30 13:53:04.537007 kernel: raid6: using algorithm avx512x4 gen() 14570 MB/s Jan 30 13:53:04.555539 kernel: raid6: .... xor() 7290 MB/s, rmw enabled Jan 30 13:53:04.555625 kernel: raid6: using avx512x2 recovery algorithm Jan 30 13:53:04.577834 kernel: xor: automatically using best checksumming function avx Jan 30 13:53:04.765838 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:53:04.776916 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:53:04.785009 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:53:04.800460 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jan 30 13:53:04.806309 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:53:04.822621 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:53:04.841160 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Jan 30 13:53:04.875306 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:53:04.883028 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:53:05.034802 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:53:05.042039 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:53:05.077331 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:53:05.081533 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:53:05.086788 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:53:05.090329 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:53:05.099582 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:53:05.140306 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:53:05.148904 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 30 13:53:05.183421 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 30 13:53:05.183689 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jan 30 13:53:05.183896 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:53:05.183921 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:8e:9d:3d:f2:fb Jan 30 13:53:05.172739 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:53:05.173024 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 13:53:05.174979 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:53:05.177114 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:53:05.177334 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:53:05.178822 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:53:05.187186 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:53:05.193150 (udev-worker)[445]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:53:05.225874 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 30 13:53:05.226233 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 30 13:53:05.229833 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:53:05.229889 kernel: AES CTR mode by8 optimization enabled Jan 30 13:53:05.240853 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 30 13:53:05.246863 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:53:05.246918 kernel: GPT:9289727 != 16777215 Jan 30 13:53:05.246937 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:53:05.246954 kernel: GPT:9289727 != 16777215 Jan 30 13:53:05.246971 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:53:05.246988 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 30 13:53:05.341841 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (442) Jan 30 13:53:05.372696 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 30 13:53:05.381296 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (441) Jan 30 13:53:05.395638 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:53:05.406027 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:53:05.456617 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 30 13:53:05.465836 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 30 13:53:05.469181 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:53:05.475656 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 30 13:53:05.475780 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 30 13:53:05.486127 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:53:05.504901 disk-uuid[628]: Primary Header is updated. Jan 30 13:53:05.504901 disk-uuid[628]: Secondary Entries is updated. Jan 30 13:53:05.504901 disk-uuid[628]: Secondary Header is updated. Jan 30 13:53:05.511835 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 30 13:53:05.517839 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 30 13:53:05.522844 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 30 13:53:06.525025 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 30 13:53:06.525470 disk-uuid[629]: The operation has completed successfully. Jan 30 13:53:06.678757 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:53:06.678890 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jan 30 13:53:06.697967 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:53:06.710673 sh[972]: Success Jan 30 13:53:06.754831 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:53:06.870123 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:53:06.880112 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:53:06.883254 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:53:06.927310 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:53:06.927376 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:53:06.927440 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:53:06.928909 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:53:06.928938 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:53:07.054834 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 13:53:07.068979 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:53:07.070150 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:53:07.076055 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:53:07.079979 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:53:07.106657 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:53:07.106873 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:53:07.106914 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 30 13:53:07.114793 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 30 13:53:07.125460 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:53:07.127463 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:53:07.134402 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:53:07.144036 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:53:07.192274 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:53:07.202124 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:53:07.229568 systemd-networkd[1164]: lo: Link UP Jan 30 13:53:07.229579 systemd-networkd[1164]: lo: Gained carrier Jan 30 13:53:07.231566 systemd-networkd[1164]: Enumeration completed Jan 30 13:53:07.231669 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:53:07.232077 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:53:07.232081 systemd-networkd[1164]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:53:07.233859 systemd[1]: Reached target network.target - Network. Jan 30 13:53:07.242545 systemd-networkd[1164]: eth0: Link UP Jan 30 13:53:07.242551 systemd-networkd[1164]: eth0: Gained carrier Jan 30 13:53:07.242562 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 30 13:53:07.263973 systemd-networkd[1164]: eth0: DHCPv4 address 172.31.27.35/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 30 13:53:07.484122 ignition[1096]: Ignition 2.19.0 Jan 30 13:53:07.484137 ignition[1096]: Stage: fetch-offline Jan 30 13:53:07.484403 ignition[1096]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:53:07.484416 ignition[1096]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:53:07.484832 ignition[1096]: Ignition finished successfully Jan 30 13:53:07.490094 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:53:07.504098 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 13:53:07.522587 ignition[1173]: Ignition 2.19.0 Jan 30 13:53:07.522602 ignition[1173]: Stage: fetch Jan 30 13:53:07.523072 ignition[1173]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:53:07.523087 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:53:07.523194 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:53:07.556031 ignition[1173]: PUT result: OK Jan 30 13:53:07.559702 ignition[1173]: parsed url from cmdline: "" Jan 30 13:53:07.559714 ignition[1173]: no config URL provided Jan 30 13:53:07.559726 ignition[1173]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:53:07.559740 ignition[1173]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:53:07.559762 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:53:07.561354 ignition[1173]: PUT result: OK Jan 30 13:53:07.561425 ignition[1173]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 30 13:53:07.564869 ignition[1173]: GET result: OK Jan 30 13:53:07.564924 ignition[1173]: parsing config with SHA512: b4af8df80e377a2cc084172338cfec2ffe5000d60e9571b6e573f0864be62466cd646e1c628419d398027310711cb4a3b4975b9c8af70fc9966b9b8fa04c499a Jan 30 13:53:07.568845 unknown[1173]: fetched base config from "system" Jan 30 13:53:07.568854 unknown[1173]: fetched base config from "system" Jan 30 13:53:07.569253 ignition[1173]: fetch: fetch complete Jan 30 13:53:07.568862 unknown[1173]: fetched user config from "aws" Jan 30 13:53:07.569259 ignition[1173]: fetch: fetch passed Jan 30 13:53:07.572582 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:53:07.569319 ignition[1173]: Ignition finished successfully Jan 30 13:53:07.579993 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:53:07.598076 ignition[1180]: Ignition 2.19.0 Jan 30 13:53:07.598093 ignition[1180]: Stage: kargs Jan 30 13:53:07.598704 ignition[1180]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:53:07.598717 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:53:07.598852 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:53:07.600407 ignition[1180]: PUT result: OK Jan 30 13:53:07.606441 ignition[1180]: kargs: kargs passed Jan 30 13:53:07.606519 ignition[1180]: Ignition finished successfully Jan 30 13:53:07.608488 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:53:07.613130 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 30 13:53:07.631487 ignition[1187]: Ignition 2.19.0 Jan 30 13:53:07.631503 ignition[1187]: Stage: disks Jan 30 13:53:07.632007 ignition[1187]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:53:07.632017 ignition[1187]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:53:07.632193 ignition[1187]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:53:07.634515 ignition[1187]: PUT result: OK Jan 30 13:53:07.641759 ignition[1187]: disks: disks passed Jan 30 13:53:07.641852 ignition[1187]: Ignition finished successfully Jan 30 13:53:07.643818 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:53:07.644462 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:53:07.648815 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:53:07.648962 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:53:07.653951 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:53:07.656359 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:53:07.662989 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:53:07.715674 systemd-fsck[1195]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:53:07.721640 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:53:07.735960 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:53:07.881941 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:53:07.882679 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:53:07.883355 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:53:07.899192 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:53:07.905548 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:53:07.908262 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 13:53:07.908317 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:53:07.908342 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:53:07.921929 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:53:07.927833 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1214) Jan 30 13:53:07.929674 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:53:07.930650 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:53:07.931906 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:53:07.931920 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 30 13:53:07.942822 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 30 13:53:07.943769 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:53:08.343156 initrd-setup-root[1238]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:53:08.366729 initrd-setup-root[1245]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:53:08.373288 initrd-setup-root[1252]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:53:08.392550 initrd-setup-root[1259]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:53:08.740735 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:53:08.750066 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:53:08.759883 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:53:08.778159 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:53:08.777719 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:53:08.838558 ignition[1326]: INFO : Ignition 2.19.0 Jan 30 13:53:08.838558 ignition[1326]: INFO : Stage: mount Jan 30 13:53:08.841030 ignition[1326]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:53:08.841030 ignition[1326]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:53:08.841030 ignition[1326]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:53:08.841030 ignition[1326]: INFO : PUT result: OK Jan 30 13:53:08.847360 ignition[1326]: INFO : mount: mount passed Jan 30 13:53:08.848497 ignition[1326]: INFO : Ignition finished successfully Jan 30 13:53:08.849859 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:53:08.852532 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:53:08.860951 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:53:08.878948 systemd-networkd[1164]: eth0: Gained IPv6LL Jan 30 13:53:08.880129 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:53:08.899836 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1339) Jan 30 13:53:08.901846 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:53:08.901904 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:53:08.903239 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 30 13:53:08.907828 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 30 13:53:08.910178 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:53:08.938506 ignition[1356]: INFO : Ignition 2.19.0 Jan 30 13:53:08.940039 ignition[1356]: INFO : Stage: files Jan 30 13:53:08.940039 ignition[1356]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:53:08.940039 ignition[1356]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:53:08.940039 ignition[1356]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:53:08.945164 ignition[1356]: INFO : PUT result: OK Jan 30 13:53:08.948320 ignition[1356]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:53:08.971139 ignition[1356]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:53:08.971139 ignition[1356]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:53:09.030990 ignition[1356]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:53:09.033392 ignition[1356]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:53:09.033392 ignition[1356]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:53:09.032438 unknown[1356]: wrote ssh authorized keys file for user: core Jan 30 13:53:09.039859 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:53:09.039859 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:53:09.039859 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:53:09.039859 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:53:09.039859 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:53:09.039859 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:53:09.039859 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:53:09.039859 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 30 13:53:09.513129 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 30 13:53:09.811157 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:53:09.814947 ignition[1356]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:53:09.819195 ignition[1356]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:53:09.819195 ignition[1356]: INFO : files: files passed Jan 30 13:53:09.819195 ignition[1356]: INFO : Ignition finished successfully Jan 30 13:53:09.822486 systemd[1]: Finished ignition-files.service - Ignition (files). 
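The files stage logged above boils down to a handful of filesystem operations against the mounted `/sysroot`: write `/etc/flatcar/update.conf`, download the kubernetes sysext image into `/opt/extensions/`, and link it into `/etc/extensions/`. A rough Python re-creation of those steps under stated assumptions (the `update.conf` contents and the error handling are guesses; Ignition actually drives this from its JSON config while still in the initramfs):

```python
# Hypothetical re-creation of the file/link operations reported by the files
# stage. Paths and the download URL are taken from the log; the update.conf
# contents are an assumed placeholder.
import pathlib
import urllib.request

SYSROOT = pathlib.Path("/sysroot")
RAW_URL = ("https://github.com/flatcar/sysext-bakery/releases/download/"
           "latest/kubernetes-v1.31.0-x86-64.raw")

def main() -> None:
    # op(4): write /etc/flatcar/update.conf (contents assumed for illustration).
    conf = SYSROOT / "etc/flatcar/update.conf"
    conf.parent.mkdir(parents=True, exist_ok=True)
    conf.write_text("GROUP=stable\n")

    # op(6): download the kubernetes sysext image into /opt/extensions.
    target = SYSROOT / "opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
    target.parent.mkdir(parents=True, exist_ok=True)
    urllib.request.urlretrieve(RAW_URL, target)

    # op(5): link it into /etc/extensions so systemd-sysext will pick it up.
    link = SYSROOT / "etc/extensions/kubernetes.raw"
    link.parent.mkdir(parents=True, exist_ok=True)
    if link.is_symlink() or link.exists():
        link.unlink()
    link.symlink_to("/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw")

if __name__ == "__main__":
    main()
```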
Jan 30 13:53:09.835151 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:53:09.839649 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:53:09.845602 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:53:09.845716 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:53:09.872938 initrd-setup-root-after-ignition[1384]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:53:09.872938 initrd-setup-root-after-ignition[1384]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:53:09.878291 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:53:09.881818 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:53:09.885070 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:53:09.898012 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:53:09.934443 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:53:09.934576 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:53:09.937857 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:53:09.941626 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:53:09.944365 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:53:09.951113 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:53:09.968503 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:53:09.975277 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:53:09.988532 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:53:09.988752 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:53:09.994323 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:53:09.997275 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:53:09.998617 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:53:10.001716 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:53:10.003910 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:53:10.006046 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:53:10.014150 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:53:10.017367 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:53:10.025158 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:53:10.027577 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:53:10.029091 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:53:10.031552 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:53:10.034760 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:53:10.036562 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:53:10.037830 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 30 13:53:10.040214 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:53:10.042527 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:53:10.044923 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:53:10.046032 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:53:10.048649 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:53:10.049798 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:53:10.052631 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:53:10.054180 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:53:10.057158 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:53:10.058238 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:53:10.065045 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:53:10.070043 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:53:10.072554 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:53:10.073084 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:53:10.076758 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:53:10.076981 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:53:10.085475 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:53:10.085604 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:53:10.102836 ignition[1408]: INFO : Ignition 2.19.0 Jan 30 13:53:10.102836 ignition[1408]: INFO : Stage: umount Jan 30 13:53:10.102836 ignition[1408]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:53:10.102836 ignition[1408]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 13:53:10.102836 ignition[1408]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 13:53:10.110943 ignition[1408]: INFO : PUT result: OK Jan 30 13:53:10.110233 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:53:10.114925 ignition[1408]: INFO : umount: umount passed Jan 30 13:53:10.114925 ignition[1408]: INFO : Ignition finished successfully Jan 30 13:53:10.118106 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:53:10.118256 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:53:10.120572 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:53:10.120628 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:53:10.124518 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:53:10.124584 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:53:10.127871 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:53:10.127929 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:53:10.129118 systemd[1]: Stopped target network.target - Network. Jan 30 13:53:10.131056 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:53:10.132072 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:53:10.134264 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:53:10.136469 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 30 13:53:10.138676 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:53:10.143707 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:53:10.154983 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:53:10.156398 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:53:10.156459 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:53:10.161410 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:53:10.161461 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:53:10.163753 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:53:10.163838 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:53:10.171875 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:53:10.171951 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:53:10.174486 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:53:10.175926 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:53:10.183871 systemd-networkd[1164]: eth0: DHCPv6 lease lost Jan 30 13:53:10.185385 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:53:10.185552 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:53:10.191005 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:53:10.191173 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:53:10.195801 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:53:10.195890 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:53:10.203998 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:53:10.206231 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:53:10.206307 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:53:10.208159 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:53:10.208562 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:53:10.209777 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:53:10.209843 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:53:10.218899 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:53:10.218972 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:53:10.223964 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:53:10.239745 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:53:10.239914 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:53:10.243658 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:53:10.243891 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:53:10.249593 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:53:10.249707 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:53:10.252934 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:53:10.252982 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 30 13:53:10.255079 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:53:10.255127 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:53:10.261500 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:53:10.261575 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:53:10.263866 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:53:10.263927 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:53:10.272068 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:53:10.273464 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:53:10.273527 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:53:10.275736 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:53:10.275786 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:53:10.277499 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:53:10.277659 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:53:10.279229 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:53:10.280749 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:53:10.286548 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:53:10.288485 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:53:10.298276 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:53:10.298375 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:53:10.300546 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:53:10.301867 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:53:10.301917 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:53:10.314062 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:53:10.324640 systemd[1]: Switching root. Jan 30 13:53:10.385474 systemd-journald[178]: Journal stopped Jan 30 13:53:12.559762 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jan 30 13:53:12.562478 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:53:12.562509 kernel: SELinux: policy capability open_perms=1 Jan 30 13:53:12.562527 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:53:12.562562 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:53:12.562587 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:53:12.562610 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:53:12.562629 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:53:12.562648 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:53:12.562667 kernel: audit: type=1403 audit(1738245190.827:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:53:12.562688 systemd[1]: Successfully loaded SELinux policy in 66.079ms. Jan 30 13:53:12.562718 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.372ms. 
Jan 30 13:53:12.562740 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:53:12.562765 systemd[1]: Detected virtualization amazon. Jan 30 13:53:12.562788 systemd[1]: Detected architecture x86-64. Jan 30 13:53:12.564923 systemd[1]: Detected first boot. Jan 30 13:53:12.564964 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:53:12.564999 zram_generator::config[1450]: No configuration found. Jan 30 13:53:12.565029 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:53:12.565054 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:53:12.565078 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:53:12.565101 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:53:12.565126 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:53:12.565151 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:53:12.565178 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:53:12.565203 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:53:12.565231 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:53:12.565259 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:53:12.565285 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:53:12.565310 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:53:12.565335 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:53:12.565362 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:53:12.565388 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:53:12.565411 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:53:12.565436 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:53:12.565466 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:53:12.565491 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:53:12.565517 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:53:12.565541 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:53:12.565566 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:53:12.565590 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:53:12.565616 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:53:12.565646 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:53:12.565669 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:53:12.565694 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:53:12.565718 systemd[1]: Reached target swap.target - Swaps. 
Jan 30 13:53:12.565742 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:53:12.565766 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:53:12.565791 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:53:12.565838 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:53:12.565862 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:53:12.565886 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:53:12.565914 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:53:12.565936 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:53:12.565961 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:53:12.565986 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:53:12.566011 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:53:12.566033 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:53:12.566057 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:53:12.566082 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:53:12.566109 systemd[1]: Reached target machines.target - Containers. Jan 30 13:53:12.566132 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:53:12.566158 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:53:12.566183 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:53:12.566213 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:53:12.566254 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:53:12.566277 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:53:12.566302 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:53:12.566326 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:53:12.566355 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:53:12.566381 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:53:12.566406 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:53:12.566430 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:53:12.566454 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:53:12.566479 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:53:12.566503 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:53:12.566528 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:53:12.566554 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:53:12.566582 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 30 13:53:12.566608 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:53:12.566634 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:53:12.566657 systemd[1]: Stopped verity-setup.service. Jan 30 13:53:12.566682 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:53:12.566708 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:53:12.566734 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:53:12.566758 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:53:12.566787 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:53:12.566830 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:53:12.566852 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:53:12.566872 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:53:12.566893 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:53:12.566920 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:53:12.566940 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:53:12.566960 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:53:12.566981 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:53:12.567001 kernel: loop: module loaded Jan 30 13:53:12.567023 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:53:12.567043 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:53:12.567064 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:53:12.567090 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:53:12.567111 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:53:12.567131 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:53:12.567151 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:53:12.567173 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:53:12.567231 systemd-journald[1543]: Collecting audit messages is disabled. Jan 30 13:53:12.567277 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:53:12.567300 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:53:12.567322 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:53:12.567341 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:53:12.567362 systemd-journald[1543]: Journal started Jan 30 13:53:12.567400 systemd-journald[1543]: Runtime Journal (/run/log/journal/ec23e6132b08bc3404bb1b978f07c0d7) is 4.8M, max 38.6M, 33.7M free. Jan 30 13:53:12.570394 kernel: fuse: init (API version 7.39) Jan 30 13:53:12.570439 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:53:12.041415 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:53:12.078634 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. 
Jan 30 13:53:12.079238 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:53:12.617707 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:53:12.617820 kernel: ACPI: bus type drm_connector registered Jan 30 13:53:12.617853 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:53:12.625790 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:53:12.625873 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:53:12.634824 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:53:12.636830 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:53:12.647165 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:53:12.661740 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:53:12.663383 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:53:12.668829 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:53:12.671565 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:53:12.671944 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:53:12.677159 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:53:12.677350 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:53:12.679215 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:53:12.681363 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:53:12.683232 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:53:12.685903 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:53:12.722739 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:53:12.747437 kernel: loop0: detected capacity change from 0 to 142488 Jan 30 13:53:12.741379 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:53:12.752192 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:53:12.760080 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:53:12.768421 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:53:12.772241 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:53:12.775731 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:53:12.793627 systemd-tmpfiles[1558]: ACLs are not supported, ignoring. Jan 30 13:53:12.793657 systemd-tmpfiles[1558]: ACLs are not supported, ignoring. Jan 30 13:53:12.798553 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:53:12.800316 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:53:12.806871 systemd-journald[1543]: Time spent on flushing to /var/log/journal/ec23e6132b08bc3404bb1b978f07c0d7 is 69.713ms for 956 entries. 
Jan 30 13:53:12.806871 systemd-journald[1543]: System Journal (/var/log/journal/ec23e6132b08bc3404bb1b978f07c0d7) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:53:12.890157 systemd-journald[1543]: Received client request to flush runtime journal. Jan 30 13:53:12.890239 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:53:12.811562 udevadm[1587]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:53:12.820934 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:53:12.831981 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:53:12.892276 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:53:12.910891 kernel: loop1: detected capacity change from 0 to 205544 Jan 30 13:53:12.926177 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:53:12.933501 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:53:12.962787 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Jan 30 13:53:12.963175 systemd-tmpfiles[1599]: ACLs are not supported, ignoring. Jan 30 13:53:12.971200 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:53:13.042103 kernel: loop2: detected capacity change from 0 to 61336 Jan 30 13:53:13.137841 kernel: loop3: detected capacity change from 0 to 140768 Jan 30 13:53:13.244951 kernel: loop4: detected capacity change from 0 to 142488 Jan 30 13:53:13.266838 kernel: loop5: detected capacity change from 0 to 205544 Jan 30 13:53:13.310836 kernel: loop6: detected capacity change from 0 to 61336 Jan 30 13:53:13.328835 kernel: loop7: detected capacity change from 0 to 140768 Jan 30 13:53:13.361381 (sd-merge)[1605]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 30 13:53:13.363270 (sd-merge)[1605]: Merged extensions into '/usr'. Jan 30 13:53:13.376630 systemd[1]: Reloading requested from client PID 1557 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:53:13.376769 systemd[1]: Reloading... Jan 30 13:53:13.506849 zram_generator::config[1637]: No configuration found. Jan 30 13:53:13.698067 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:53:13.782465 systemd[1]: Reloading finished in 404 ms. Jan 30 13:53:13.812005 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:53:13.814101 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:53:13.826023 systemd[1]: Starting ensure-sysext.service... Jan 30 13:53:13.828688 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:53:13.836051 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:53:13.848124 systemd[1]: Reloading requested from client PID 1681 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:53:13.848146 systemd[1]: Reloading... Jan 30 13:53:13.883696 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
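The `(sd-merge)` lines a little further up show systemd-sysext locating the extension images (including the `kubernetes.raw` link written by the files stage) and merging them into `/usr`. A toy discovery pass over the conventional extension directories, just to show where the names in that log line come from; the directory list is an assumption, and the real tool goes on to overlay-mount `/usr`, which this sketch does not attempt:

```python
# Toy version of the discovery step behind "(sd-merge): Using extensions ...".
# The search directories mirror where systemd-sysext is commonly documented to
# look; treat the list as an assumption.
import pathlib

SEARCH_DIRS = [
    "/etc/extensions",
    "/run/extensions",
    "/var/lib/extensions",
    "/usr/lib/extensions",
    "/usr/local/lib/extensions",
]

def discover_extensions() -> list[str]:
    names = []
    for d in SEARCH_DIRS:
        p = pathlib.Path(d)
        if not p.is_dir():
            continue
        for entry in sorted(p.iterdir()):
            # Both raw disk images (*.raw) and plain directory trees count.
            if entry.suffix == ".raw" or entry.is_dir():
                names.append(entry.name.removesuffix(".raw"))
    return names

if __name__ == "__main__":
    print("Using extensions:", ", ".join(f"'{n}'" for n in discover_extensions()))
```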
Jan 30 13:53:13.884253 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:53:13.887892 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:53:13.888306 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. Jan 30 13:53:13.888398 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. Jan 30 13:53:13.898172 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:53:13.898199 systemd-tmpfiles[1682]: Skipping /boot Jan 30 13:53:13.906941 systemd-udevd[1683]: Using default interface naming scheme 'v255'. Jan 30 13:53:13.925175 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:53:13.925320 systemd-tmpfiles[1682]: Skipping /boot Jan 30 13:53:13.990898 zram_generator::config[1709]: No configuration found. Jan 30 13:53:14.189951 (udev-worker)[1760]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:53:14.194101 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:53:14.324023 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 30 13:53:14.356995 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:53:14.346567 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:53:14.349852 systemd[1]: Reloading finished in 501 ms. Jan 30 13:53:14.363839 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:53:14.367526 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jan 30 13:53:14.379840 kernel: ACPI: button: Sleep Button [SLPF] Jan 30 13:53:14.383832 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Jan 30 13:53:14.385724 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:53:14.388257 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:53:14.409464 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1768) Jan 30 13:53:14.432347 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:53:14.438145 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:53:14.449288 ldconfig[1553]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:53:14.445141 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:53:14.453187 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:53:14.473924 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:53:14.486340 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:53:14.491314 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:53:14.502253 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:53:14.505058 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Jan 30 13:53:14.531802 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:53:14.532141 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:53:14.541222 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:53:14.550263 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:53:14.554228 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:53:14.555663 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:53:14.559927 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:53:14.561188 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:53:14.567020 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:53:14.567537 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:53:14.567762 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:53:14.568739 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:53:14.580234 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:53:14.580612 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:53:14.598230 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:53:14.599688 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:53:14.599985 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:53:14.601382 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:53:14.611612 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:53:14.614073 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:53:14.614691 systemd[1]: Finished ensure-sysext.service. Jan 30 13:53:14.638254 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:53:14.643216 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:53:14.643408 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:53:14.661862 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:53:14.668008 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:53:14.668227 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:53:14.681395 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 30 13:53:14.681605 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:53:14.682513 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:53:14.698305 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:53:14.698839 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:53:14.699018 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:53:14.702366 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:53:14.709289 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:53:14.742982 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 30 13:53:14.752033 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:53:14.791215 augenrules[1913]: No rules Jan 30 13:53:14.795881 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:53:14.805167 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:53:14.812324 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:53:14.824499 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:53:14.834974 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:53:14.866115 lvm[1924]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:53:14.899878 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:53:14.917402 systemd-networkd[1801]: lo: Link UP Jan 30 13:53:14.917417 systemd-networkd[1801]: lo: Gained carrier Jan 30 13:53:14.923161 systemd-networkd[1801]: Enumeration completed Jan 30 13:53:14.923595 systemd-networkd[1801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:53:14.923600 systemd-networkd[1801]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:53:14.929386 systemd-resolved[1813]: Positive Trust Anchors: Jan 30 13:53:14.929871 systemd-networkd[1801]: eth0: Link UP Jan 30 13:53:14.930156 systemd-networkd[1801]: eth0: Gained carrier Jan 30 13:53:14.930253 systemd-networkd[1801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:53:14.930481 systemd-resolved[1813]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:53:14.930611 systemd-resolved[1813]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:53:14.941869 systemd-networkd[1801]: eth0: DHCPv4 address 172.31.27.35/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 30 13:53:14.949521 systemd-resolved[1813]: Defaulting to hostname 'linux'. Jan 30 13:53:15.019700 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:53:15.020381 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:53:15.020639 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:53:15.021015 systemd[1]: Reached target network.target - Network. Jan 30 13:53:15.021185 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:53:15.033127 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:53:15.037988 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:53:15.039462 lvm[1928]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:53:15.049070 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:53:15.054931 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:53:15.060802 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:53:15.062372 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:53:15.063964 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:53:15.065304 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:53:15.066770 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:53:15.068092 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:53:15.068122 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:53:15.069196 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:53:15.071381 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:53:15.074036 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:53:15.085896 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:53:15.088338 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:53:15.090135 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:53:15.092606 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:53:15.093877 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:53:15.095228 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
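The DHCPv4 lease logged above, `172.31.27.35/20` with gateway `172.31.16.1`, is internally consistent: the address falls in the `172.31.16.0/20` subnet, and the gateway is the subnet base plus one, where a VPC router conventionally sits. The arithmetic can be checked with the standard `ipaddress` module:

```python
# Verify the lease reported by systemd-networkd: 172.31.27.35/20 lives in
# 172.31.16.0/20, and the advertised gateway 172.31.16.1 is base + 1.
import ipaddress

iface = ipaddress.ip_interface("172.31.27.35/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                                 # 172.31.16.0/20
print(gateway in iface.network)                      # True
print(gateway == iface.network.network_address + 1)  # True
```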
Jan 30 13:53:15.095271 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:53:15.099937 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:53:15.105147 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:53:15.111451 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:53:15.115925 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:53:15.126035 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:53:15.127617 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:53:15.129773 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:53:15.141208 systemd[1]: Started ntpd.service - Network Time Service. Jan 30 13:53:15.148629 jq[1937]: false Jan 30 13:53:15.154523 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 30 13:53:15.158002 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:53:15.165491 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:53:15.193311 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:53:15.195320 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:53:15.197073 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:53:15.203034 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:53:15.213964 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:53:15.228352 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:53:15.228606 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:53:15.249753 jq[1949]: true Jan 30 13:53:15.249789 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:53:15.250121 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:53:15.281513 jq[1959]: true Jan 30 13:53:15.327346 (ntainerd)[1963]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:53:15.338368 extend-filesystems[1938]: Found loop4 Jan 30 13:53:15.338368 extend-filesystems[1938]: Found loop5 Jan 30 13:53:15.336336 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:53:15.346019 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 30 13:53:15.346019 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 13:53:15.346019 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: ---------------------------------------------------- Jan 30 13:53:15.346019 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: ntp-4 is maintained by Network Time Foundation, Jan 30 13:53:15.346019 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 13:53:15.346019 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: corporation. 
Support and training for ntp-4 are Jan 30 13:53:15.346019 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: available at https://www.nwtime.org/support Jan 30 13:53:15.346019 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: ---------------------------------------------------- Jan 30 13:53:15.342120 ntpd[1940]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 30 13:53:15.337886 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:53:15.342144 ntpd[1940]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 13:53:15.344280 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:53:15.342155 ntpd[1940]: ---------------------------------------------------- Jan 30 13:53:15.342164 ntpd[1940]: ntp-4 is maintained by Network Time Foundation, Jan 30 13:53:15.342185 ntpd[1940]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 13:53:15.342195 ntpd[1940]: corporation. Support and training for ntp-4 are Jan 30 13:53:15.353302 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:53:15.356114 extend-filesystems[1938]: Found loop6 Jan 30 13:53:15.356114 extend-filesystems[1938]: Found loop7 Jan 30 13:53:15.356114 extend-filesystems[1938]: Found nvme0n1 Jan 30 13:53:15.356114 extend-filesystems[1938]: Found nvme0n1p1 Jan 30 13:53:15.356114 extend-filesystems[1938]: Found nvme0n1p2 Jan 30 13:53:15.356114 extend-filesystems[1938]: Found nvme0n1p3 Jan 30 13:53:15.342205 ntpd[1940]: available at https://www.nwtime.org/support Jan 30 13:53:15.353346 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:53:15.369383 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: proto: precision = 0.062 usec (-24) Jan 30 13:53:15.369383 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: basedate set to 2025-01-17 Jan 30 13:53:15.369383 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: gps base set to 2025-01-19 (week 2350) Jan 30 13:53:15.374000 extend-filesystems[1938]: Found usr Jan 30 13:53:15.374000 extend-filesystems[1938]: Found nvme0n1p4 Jan 30 13:53:15.374000 extend-filesystems[1938]: Found nvme0n1p6 Jan 30 13:53:15.374000 extend-filesystems[1938]: Found nvme0n1p7 Jan 30 13:53:15.374000 extend-filesystems[1938]: Found nvme0n1p9 Jan 30 13:53:15.374000 extend-filesystems[1938]: Checking size of /dev/nvme0n1p9 Jan 30 13:53:15.342216 ntpd[1940]: ---------------------------------------------------- Jan 30 13:53:15.355077 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jan 30 13:53:15.383134 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 13:53:15.383134 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 13:53:15.383134 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 13:53:15.383134 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: Listen normally on 3 eth0 172.31.27.35:123 Jan 30 13:53:15.383134 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: Listen normally on 4 lo [::1]:123 Jan 30 13:53:15.383134 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: bind(21) AF_INET6 fe80::48e:9dff:fe3d:f2fb%2#123 flags 0x11 failed: Cannot assign requested address Jan 30 13:53:15.383134 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: unable to create socket on eth0 (5) for fe80::48e:9dff:fe3d:f2fb%2#123 Jan 30 13:53:15.383134 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: failed to init interface for address fe80::48e:9dff:fe3d:f2fb%2 Jan 30 13:53:15.383134 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: Listening on routing socket on fd #21 for interface updates Jan 30 13:53:15.343689 dbus-daemon[1936]: [system] SELinux support is enabled Jan 30 13:53:15.355106 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:53:15.354089 ntpd[1940]: proto: precision = 0.062 usec (-24) Jan 30 13:53:15.357385 ntpd[1940]: basedate set to 2025-01-17 Jan 30 13:53:15.357404 ntpd[1940]: gps base set to 2025-01-19 (week 2350) Jan 30 13:53:15.373838 ntpd[1940]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 13:53:15.373901 ntpd[1940]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 13:53:15.379706 dbus-daemon[1936]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1801 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 30 13:53:15.382287 ntpd[1940]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 13:53:15.382337 ntpd[1940]: Listen normally on 3 eth0 172.31.27.35:123 Jan 30 13:53:15.382381 ntpd[1940]: Listen normally on 4 lo [::1]:123 Jan 30 13:53:15.382433 ntpd[1940]: bind(21) AF_INET6 fe80::48e:9dff:fe3d:f2fb%2#123 flags 0x11 failed: Cannot assign requested address Jan 30 13:53:15.382456 ntpd[1940]: unable to create socket on eth0 (5) for fe80::48e:9dff:fe3d:f2fb%2#123 Jan 30 13:53:15.382472 ntpd[1940]: failed to init interface for address fe80::48e:9dff:fe3d:f2fb%2 Jan 30 13:53:15.382508 ntpd[1940]: Listening on routing socket on fd #21 for interface updates Jan 30 13:53:15.391741 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 30 13:53:15.397960 update_engine[1946]: I20250130 13:53:15.394342 1946 main.cc:92] Flatcar Update Engine starting Jan 30 13:53:15.408983 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:53:15.410997 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:53:15.410997 ntpd[1940]: 30 Jan 13:53:15 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:53:15.410868 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 30 13:53:15.409849 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:53:15.416271 systemd[1]: Started update-engine.service - Update Engine. 
Jan 30 13:53:15.416554 update_engine[1946]: I20250130 13:53:15.416369 1946 update_check_scheduler.cc:74] Next update check in 5m53s Jan 30 13:53:15.419125 extend-filesystems[1938]: Resized partition /dev/nvme0n1p9 Jan 30 13:53:15.428683 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:53:15.431234 extend-filesystems[1999]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:53:15.448363 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 30 13:53:15.500455 coreos-metadata[1935]: Jan 30 13:53:15.498 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.501 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.502 INFO Fetch successful Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.502 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.503 INFO Fetch successful Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.503 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.504 INFO Fetch successful Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.504 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.505 INFO Fetch successful Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.505 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.506 INFO Fetch failed with 404: resource not found Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.507 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.507 INFO Fetch successful Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.507 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.508 INFO Fetch successful Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.508 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.509 INFO Fetch successful Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.509 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.511 INFO Fetch successful Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.511 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 30 13:53:15.570849 coreos-metadata[1935]: Jan 30 13:53:15.520 INFO Fetch successful Jan 30 13:53:15.582459 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1758) Jan 30 13:53:15.582592 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 30 13:53:15.573545 systemd-logind[1944]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:53:15.573570 systemd-logind[1944]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 30 13:53:15.574259 systemd-logind[1944]: Watching system buttons on 
/dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:53:15.575159 systemd-logind[1944]: New seat seat0. Jan 30 13:53:15.577321 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:53:15.609184 extend-filesystems[1999]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 30 13:53:15.609184 extend-filesystems[1999]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:53:15.609184 extend-filesystems[1999]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 30 13:53:15.724709 extend-filesystems[1938]: Resized filesystem in /dev/nvme0n1p9 Jan 30 13:53:15.671870 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 30 13:53:15.726947 bash[2000]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:53:15.621665 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:53:15.672641 dbus-daemon[1936]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=1989 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 30 13:53:15.621963 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:53:15.732956 sshd_keygen[1990]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:53:15.708597 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:53:15.713252 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:53:15.726730 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 30 13:53:15.741521 systemd[1]: Starting polkit.service - Authorization Manager... Jan 30 13:53:15.769749 systemd[1]: Starting sshkeys.service... Jan 30 13:53:15.771467 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:53:15.786925 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:53:15.806112 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 13:53:15.814325 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 13:53:15.832314 polkitd[2049]: Started polkitd version 121 Jan 30 13:53:15.886720 polkitd[2049]: Loading rules from directory /etc/polkit-1/rules.d Jan 30 13:53:15.888017 polkitd[2049]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 30 13:53:15.892123 polkitd[2049]: Finished loading, compiling and executing 2 rules Jan 30 13:53:15.896024 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 30 13:53:15.897184 polkitd[2049]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 30 13:53:15.897501 systemd[1]: Started polkit.service - Authorization Manager. Jan 30 13:53:15.910214 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:53:15.930245 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:53:15.939767 systemd[1]: Started sshd@0-172.31.27.35:22-139.178.68.195:55964.service - OpenSSH per-connection server daemon (139.178.68.195:55964). 
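extend-filesystems grows the root filesystem online here: resize2fs takes /dev/nvme0n1p9 from 553472 to 1489915 4k blocks while it is mounted on /. The manual equivalent is a single command; a hedged sketch, assuming an ext4 filesystem whose underlying partition has already been enlarged:

    # grow a mounted ext4 filesystem to fill its partition (online resize)
    sudo resize2fs /dev/nvme0n1p9
    # verify the new capacity
    df -h /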
Jan 30 13:53:15.983606 locksmithd[1998]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:53:15.988735 coreos-metadata[2080]: Jan 30 13:53:15.988 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 13:53:15.992603 coreos-metadata[2080]: Jan 30 13:53:15.989 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 30 13:53:15.992603 coreos-metadata[2080]: Jan 30 13:53:15.990 INFO Fetch successful Jan 30 13:53:15.992603 coreos-metadata[2080]: Jan 30 13:53:15.990 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 13:53:15.992603 coreos-metadata[2080]: Jan 30 13:53:15.991 INFO Fetch successful Jan 30 13:53:15.993649 systemd-resolved[1813]: System hostname changed to 'ip-172-31-27-35'. Jan 30 13:53:15.993730 systemd-hostnamed[1989]: Hostname set to (transient) Jan 30 13:53:15.996580 unknown[2080]: wrote ssh authorized keys file for user: core Jan 30 13:53:16.015432 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:53:16.015676 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:53:16.027523 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:53:16.038594 update-ssh-keys[2130]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:53:16.040530 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 13:53:16.048191 systemd[1]: Finished sshkeys.service. Jan 30 13:53:16.070650 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:53:16.086066 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:53:16.096206 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:53:16.098894 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:53:16.126568 containerd[1963]: time="2025-01-30T13:53:16.126414586Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:53:16.155049 containerd[1963]: time="2025-01-30T13:53:16.154938490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:53:16.156611 containerd[1963]: time="2025-01-30T13:53:16.156569175Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:53:16.156611 containerd[1963]: time="2025-01-30T13:53:16.156605723Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:53:16.156723 containerd[1963]: time="2025-01-30T13:53:16.156627859Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:53:16.156855 containerd[1963]: time="2025-01-30T13:53:16.156831967Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:53:16.156919 containerd[1963]: time="2025-01-30T13:53:16.156860665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:53:16.156958 containerd[1963]: time="2025-01-30T13:53:16.156936974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:53:16.156995 containerd[1963]: time="2025-01-30T13:53:16.156957798Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:53:16.157171 containerd[1963]: time="2025-01-30T13:53:16.157142549Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:53:16.157171 containerd[1963]: time="2025-01-30T13:53:16.157164352Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:53:16.157307 containerd[1963]: time="2025-01-30T13:53:16.157184830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:53:16.157550 containerd[1963]: time="2025-01-30T13:53:16.157200727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:53:16.157661 containerd[1963]: time="2025-01-30T13:53:16.157638310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:53:16.158414 containerd[1963]: time="2025-01-30T13:53:16.158379860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:53:16.158570 containerd[1963]: time="2025-01-30T13:53:16.158541651Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:53:16.158570 containerd[1963]: time="2025-01-30T13:53:16.158566030Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:53:16.159306 containerd[1963]: time="2025-01-30T13:53:16.159273759Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:53:16.159383 containerd[1963]: time="2025-01-30T13:53:16.159345863Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:53:16.169400 containerd[1963]: time="2025-01-30T13:53:16.169349811Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:53:16.169527 containerd[1963]: time="2025-01-30T13:53:16.169430286Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:53:16.169527 containerd[1963]: time="2025-01-30T13:53:16.169455934Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:53:16.169527 containerd[1963]: time="2025-01-30T13:53:16.169475453Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:53:16.169527 containerd[1963]: time="2025-01-30T13:53:16.169497533Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:53:16.169692 containerd[1963]: time="2025-01-30T13:53:16.169663454Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 30 13:53:16.170033 containerd[1963]: time="2025-01-30T13:53:16.170007570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:53:16.170165 containerd[1963]: time="2025-01-30T13:53:16.170142466Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:53:16.170233 containerd[1963]: time="2025-01-30T13:53:16.170167760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:53:16.170233 containerd[1963]: time="2025-01-30T13:53:16.170198893Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:53:16.170233 containerd[1963]: time="2025-01-30T13:53:16.170219807Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:53:16.170339 containerd[1963]: time="2025-01-30T13:53:16.170242523Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:53:16.170339 containerd[1963]: time="2025-01-30T13:53:16.170261986Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:53:16.170339 containerd[1963]: time="2025-01-30T13:53:16.170282999Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:53:16.170339 containerd[1963]: time="2025-01-30T13:53:16.170304800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:53:16.170339 containerd[1963]: time="2025-01-30T13:53:16.170325488Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:53:16.170517 containerd[1963]: time="2025-01-30T13:53:16.170344536Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:53:16.170517 containerd[1963]: time="2025-01-30T13:53:16.170362730Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:53:16.170517 containerd[1963]: time="2025-01-30T13:53:16.170397498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:53:16.170517 containerd[1963]: time="2025-01-30T13:53:16.170417997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:53:16.170517 containerd[1963]: time="2025-01-30T13:53:16.170437043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:53:16.170517 containerd[1963]: time="2025-01-30T13:53:16.170464473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:53:16.170517 containerd[1963]: time="2025-01-30T13:53:16.170483123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:53:16.170517 containerd[1963]: time="2025-01-30T13:53:16.170502307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:53:16.170913 containerd[1963]: time="2025-01-30T13:53:16.170520138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 30 13:53:16.170913 containerd[1963]: time="2025-01-30T13:53:16.170539564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:53:16.170913 containerd[1963]: time="2025-01-30T13:53:16.170559524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:53:16.170913 containerd[1963]: time="2025-01-30T13:53:16.170580627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:53:16.170913 containerd[1963]: time="2025-01-30T13:53:16.170602599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:53:16.170913 containerd[1963]: time="2025-01-30T13:53:16.170621557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:53:16.170913 containerd[1963]: time="2025-01-30T13:53:16.170640001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:53:16.170913 containerd[1963]: time="2025-01-30T13:53:16.170662121Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:53:16.170913 containerd[1963]: time="2025-01-30T13:53:16.170691387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:53:16.170913 containerd[1963]: time="2025-01-30T13:53:16.170710370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:53:16.170913 containerd[1963]: time="2025-01-30T13:53:16.170727223Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:53:16.170913 containerd[1963]: time="2025-01-30T13:53:16.170836330Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:53:16.170913 containerd[1963]: time="2025-01-30T13:53:16.170882113Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:53:16.170913 containerd[1963]: time="2025-01-30T13:53:16.170901005Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:53:16.171406 containerd[1963]: time="2025-01-30T13:53:16.170920122Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:53:16.171406 containerd[1963]: time="2025-01-30T13:53:16.170934855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:53:16.171406 containerd[1963]: time="2025-01-30T13:53:16.170956470Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:53:16.171406 containerd[1963]: time="2025-01-30T13:53:16.170971564Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:53:16.171406 containerd[1963]: time="2025-01-30T13:53:16.170995033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:53:16.171581 containerd[1963]: time="2025-01-30T13:53:16.171384446Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:53:16.171581 containerd[1963]: time="2025-01-30T13:53:16.171467594Z" level=info msg="Connect containerd service" Jan 30 13:53:16.171581 containerd[1963]: time="2025-01-30T13:53:16.171513951Z" level=info msg="using legacy CRI server" Jan 30 13:53:16.171581 containerd[1963]: time="2025-01-30T13:53:16.171524890Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:53:16.171918 containerd[1963]: time="2025-01-30T13:53:16.171640390Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:53:16.172414 containerd[1963]: time="2025-01-30T13:53:16.172375392Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:53:16.172738 
containerd[1963]: time="2025-01-30T13:53:16.172592755Z" level=info msg="Start subscribing containerd event" Jan 30 13:53:16.172738 containerd[1963]: time="2025-01-30T13:53:16.172657874Z" level=info msg="Start recovering state" Jan 30 13:53:16.172849 containerd[1963]: time="2025-01-30T13:53:16.172835564Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:53:16.172913 containerd[1963]: time="2025-01-30T13:53:16.172892822Z" level=info msg="Start event monitor" Jan 30 13:53:16.172981 containerd[1963]: time="2025-01-30T13:53:16.172929331Z" level=info msg="Start snapshots syncer" Jan 30 13:53:16.172981 containerd[1963]: time="2025-01-30T13:53:16.172953236Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:53:16.172981 containerd[1963]: time="2025-01-30T13:53:16.172964729Z" level=info msg="Start streaming server" Jan 30 13:53:16.175026 containerd[1963]: time="2025-01-30T13:53:16.173644326Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:53:16.175026 containerd[1963]: time="2025-01-30T13:53:16.173733699Z" level=info msg="containerd successfully booted in 0.048314s" Jan 30 13:53:16.174931 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:53:16.198373 sshd[2127]: Accepted publickey for core from 139.178.68.195 port 55964 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:16.201373 sshd[2127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:16.210317 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:53:16.216141 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:53:16.220971 systemd-logind[1944]: New session 1 of user core. Jan 30 13:53:16.234344 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:53:16.246146 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:53:16.250440 (systemd)[2158]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:53:16.342611 ntpd[1940]: bind(24) AF_INET6 fe80::48e:9dff:fe3d:f2fb%2#123 flags 0x11 failed: Cannot assign requested address Jan 30 13:53:16.343147 ntpd[1940]: 30 Jan 13:53:16 ntpd[1940]: bind(24) AF_INET6 fe80::48e:9dff:fe3d:f2fb%2#123 flags 0x11 failed: Cannot assign requested address Jan 30 13:53:16.343147 ntpd[1940]: 30 Jan 13:53:16 ntpd[1940]: unable to create socket on eth0 (6) for fe80::48e:9dff:fe3d:f2fb%2#123 Jan 30 13:53:16.343147 ntpd[1940]: 30 Jan 13:53:16 ntpd[1940]: failed to init interface for address fe80::48e:9dff:fe3d:f2fb%2 Jan 30 13:53:16.342655 ntpd[1940]: unable to create socket on eth0 (6) for fe80::48e:9dff:fe3d:f2fb%2#123 Jan 30 13:53:16.342671 ntpd[1940]: failed to init interface for address fe80::48e:9dff:fe3d:f2fb%2 Jan 30 13:53:16.362672 systemd[2158]: Queued start job for default target default.target. Jan 30 13:53:16.369961 systemd[2158]: Created slice app.slice - User Application Slice. Jan 30 13:53:16.370002 systemd[2158]: Reached target paths.target - Paths. Jan 30 13:53:16.370022 systemd[2158]: Reached target timers.target - Timers. Jan 30 13:53:16.371385 systemd[2158]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:53:16.383547 systemd[2158]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:53:16.383695 systemd[2158]: Reached target sockets.target - Sockets. 
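containerd starts with "failed to load cni during init … no network config found in /etc/cni/net.d", which is normal on a node whose CNI provider (Cilium, scheduled further down in this log) has not yet written its configuration; the CRI plugin keeps retrying until a conflist appears. Purely to illustrate the kind of file containerd is looking for, not the configuration this node ends up using, a minimal bridge conflist could be dropped in like this, assuming the reference CNI plugins are installed under /opt/cni/bin:

    sudo tee /etc/cni/net.d/10-bridge.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF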
Jan 30 13:53:16.383715 systemd[2158]: Reached target basic.target - Basic System. Jan 30 13:53:16.383763 systemd[2158]: Reached target default.target - Main User Target. Jan 30 13:53:16.383800 systemd[2158]: Startup finished in 126ms. Jan 30 13:53:16.384168 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:53:16.387854 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:53:16.539189 systemd[1]: Started sshd@1-172.31.27.35:22-139.178.68.195:55974.service - OpenSSH per-connection server daemon (139.178.68.195:55974). Jan 30 13:53:16.699301 sshd[2169]: Accepted publickey for core from 139.178.68.195 port 55974 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:16.701675 sshd[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:16.706867 systemd-logind[1944]: New session 2 of user core. Jan 30 13:53:16.716018 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:53:16.751010 systemd-networkd[1801]: eth0: Gained IPv6LL Jan 30 13:53:16.753993 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:53:16.756375 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:53:16.767847 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 30 13:53:16.783337 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:53:16.790317 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:53:16.857560 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:53:16.862951 sshd[2169]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:16.869679 systemd[1]: sshd@1-172.31.27.35:22-139.178.68.195:55974.service: Deactivated successfully. Jan 30 13:53:16.869964 systemd-logind[1944]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:53:16.873782 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:53:16.876317 systemd-logind[1944]: Removed session 2. Jan 30 13:53:16.888082 systemd[1]: Started sshd@2-172.31.27.35:22-139.178.68.195:55984.service - OpenSSH per-connection server daemon (139.178.68.195:55984). Jan 30 13:53:16.900574 amazon-ssm-agent[2173]: Initializing new seelog logger Jan 30 13:53:16.903050 amazon-ssm-agent[2173]: New Seelog Logger Creation Complete Jan 30 13:53:16.903050 amazon-ssm-agent[2173]: 2025/01/30 13:53:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:53:16.903050 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:53:16.903050 amazon-ssm-agent[2173]: 2025/01/30 13:53:16 processing appconfig overrides Jan 30 13:53:16.903050 amazon-ssm-agent[2173]: 2025/01/30 13:53:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:53:16.903050 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:53:16.903050 amazon-ssm-agent[2173]: 2025/01/30 13:53:16 processing appconfig overrides Jan 30 13:53:16.903050 amazon-ssm-agent[2173]: 2025/01/30 13:53:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:53:16.903050 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 30 13:53:16.903050 amazon-ssm-agent[2173]: 2025/01/30 13:53:16 processing appconfig overrides Jan 30 13:53:16.903050 amazon-ssm-agent[2173]: 2025-01-30 13:53:16 INFO Proxy environment variables: Jan 30 13:53:16.905257 amazon-ssm-agent[2173]: 2025/01/30 13:53:16 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:53:16.905257 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 13:53:16.905380 amazon-ssm-agent[2173]: 2025/01/30 13:53:16 processing appconfig overrides Jan 30 13:53:17.002147 amazon-ssm-agent[2173]: 2025-01-30 13:53:16 INFO no_proxy: Jan 30 13:53:17.070724 sshd[2192]: Accepted publickey for core from 139.178.68.195 port 55984 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:17.072718 sshd[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:17.081752 systemd-logind[1944]: New session 3 of user core. Jan 30 13:53:17.086015 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:53:17.099933 amazon-ssm-agent[2173]: 2025-01-30 13:53:16 INFO https_proxy: Jan 30 13:53:17.198390 amazon-ssm-agent[2173]: 2025-01-30 13:53:16 INFO http_proxy: Jan 30 13:53:17.217831 sshd[2192]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:17.223584 systemd-logind[1944]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:53:17.224344 systemd[1]: sshd@2-172.31.27.35:22-139.178.68.195:55984.service: Deactivated successfully. Jan 30 13:53:17.229731 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:53:17.233056 systemd-logind[1944]: Removed session 3. Jan 30 13:53:17.297354 amazon-ssm-agent[2173]: 2025-01-30 13:53:16 INFO Checking if agent identity type OnPrem can be assumed Jan 30 13:53:17.380098 amazon-ssm-agent[2173]: 2025-01-30 13:53:16 INFO Checking if agent identity type EC2 can be assumed Jan 30 13:53:17.380098 amazon-ssm-agent[2173]: 2025-01-30 13:53:16 INFO Agent will take identity from EC2 Jan 30 13:53:17.380098 amazon-ssm-agent[2173]: 2025-01-30 13:53:16 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 13:53:17.380098 amazon-ssm-agent[2173]: 2025-01-30 13:53:16 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 13:53:17.380329 amazon-ssm-agent[2173]: 2025-01-30 13:53:16 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 13:53:17.380329 amazon-ssm-agent[2173]: 2025-01-30 13:53:16 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 30 13:53:17.380329 amazon-ssm-agent[2173]: 2025-01-30 13:53:16 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 30 13:53:17.380329 amazon-ssm-agent[2173]: 2025-01-30 13:53:16 INFO [amazon-ssm-agent] Starting Core Agent Jan 30 13:53:17.380329 amazon-ssm-agent[2173]: 2025-01-30 13:53:16 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 30 13:53:17.380329 amazon-ssm-agent[2173]: 2025-01-30 13:53:16 INFO [Registrar] Starting registrar module Jan 30 13:53:17.380329 amazon-ssm-agent[2173]: 2025-01-30 13:53:16 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 30 13:53:17.380329 amazon-ssm-agent[2173]: 2025-01-30 13:53:17 INFO [EC2Identity] EC2 registration was successful. 
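amazon-ssm-agent applies optional overrides from /etc/amazon/ssm/amazon-ssm-agent.json, assumes the EC2 identity, and registers the instance. If registration or credential refresh ever needs checking after boot, the unit's own journal is the place to look; a short sketch, assuming standard systemd tooling:

    systemctl status amazon-ssm-agent --no-pager
    journalctl -u amazon-ssm-agent -n 50 --no-pager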
Jan 30 13:53:17.380329 amazon-ssm-agent[2173]: 2025-01-30 13:53:17 INFO [CredentialRefresher] credentialRefresher has started Jan 30 13:53:17.380329 amazon-ssm-agent[2173]: 2025-01-30 13:53:17 INFO [CredentialRefresher] Starting credentials refresher loop Jan 30 13:53:17.380329 amazon-ssm-agent[2173]: 2025-01-30 13:53:17 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 30 13:53:17.396217 amazon-ssm-agent[2173]: 2025-01-30 13:53:17 INFO [CredentialRefresher] Next credential rotation will be in 30.44160361055 minutes Jan 30 13:53:18.420135 amazon-ssm-agent[2173]: 2025-01-30 13:53:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 30 13:53:18.521525 amazon-ssm-agent[2173]: 2025-01-30 13:53:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2203) started Jan 30 13:53:18.621746 amazon-ssm-agent[2173]: 2025-01-30 13:53:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 30 13:53:19.003049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:53:19.005887 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:53:19.008946 systemd[1]: Startup finished in 582ms (kernel) + 7.121s (initrd) + 8.245s (userspace) = 15.949s. Jan 30 13:53:19.013374 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:53:19.342660 ntpd[1940]: Listen normally on 7 eth0 [fe80::48e:9dff:fe3d:f2fb%2]:123 Jan 30 13:53:19.343151 ntpd[1940]: 30 Jan 13:53:19 ntpd[1940]: Listen normally on 7 eth0 [fe80::48e:9dff:fe3d:f2fb%2]:123 Jan 30 13:53:20.285158 kubelet[2218]: E0130 13:53:20.285070 2218 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:53:20.288241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:53:20.288431 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:53:22.869021 systemd-resolved[1813]: Clock change detected. Flushing caches. Jan 30 13:53:27.774221 systemd[1]: Started sshd@3-172.31.27.35:22-139.178.68.195:45146.service - OpenSSH per-connection server daemon (139.178.68.195:45146). Jan 30 13:53:27.957956 sshd[2230]: Accepted publickey for core from 139.178.68.195 port 45146 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:27.959460 sshd[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:27.967466 systemd-logind[1944]: New session 4 of user core. Jan 30 13:53:27.974234 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:53:28.098570 sshd[2230]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:28.102308 systemd[1]: sshd@3-172.31.27.35:22-139.178.68.195:45146.service: Deactivated successfully. Jan 30 13:53:28.105358 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:53:28.106764 systemd-logind[1944]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:53:28.107944 systemd-logind[1944]: Removed session 4. 
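The first kubelet start at 13:53:19 fails immediately because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm when the node joins a cluster, and in this log the kubelet only comes up cleanly after the /home/core/install.sh run a few seconds later. Purely to illustrate the expected file format, not the contents actually used on this node, a minimal KubeletConfiguration could be written like this:

    sudo mkdir -p /var/lib/kubelet
    sudo tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF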
Jan 30 13:53:28.139385 systemd[1]: Started sshd@4-172.31.27.35:22-139.178.68.195:45156.service - OpenSSH per-connection server daemon (139.178.68.195:45156). Jan 30 13:53:28.292598 sshd[2237]: Accepted publickey for core from 139.178.68.195 port 45156 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:28.294408 sshd[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:28.299879 systemd-logind[1944]: New session 5 of user core. Jan 30 13:53:28.309269 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:53:28.423141 sshd[2237]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:28.427498 systemd[1]: sshd@4-172.31.27.35:22-139.178.68.195:45156.service: Deactivated successfully. Jan 30 13:53:28.429568 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:53:28.430434 systemd-logind[1944]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:53:28.431849 systemd-logind[1944]: Removed session 5. Jan 30 13:53:28.464840 systemd[1]: Started sshd@5-172.31.27.35:22-139.178.68.195:45164.service - OpenSSH per-connection server daemon (139.178.68.195:45164). Jan 30 13:53:28.624471 sshd[2244]: Accepted publickey for core from 139.178.68.195 port 45164 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:28.626051 sshd[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:28.631930 systemd-logind[1944]: New session 6 of user core. Jan 30 13:53:28.641175 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:53:28.762124 sshd[2244]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:28.765961 systemd[1]: sshd@5-172.31.27.35:22-139.178.68.195:45164.service: Deactivated successfully. Jan 30 13:53:28.767808 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:53:28.769398 systemd-logind[1944]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:53:28.770579 systemd-logind[1944]: Removed session 6. Jan 30 13:53:28.801591 systemd[1]: Started sshd@6-172.31.27.35:22-139.178.68.195:45178.service - OpenSSH per-connection server daemon (139.178.68.195:45178). Jan 30 13:53:28.957390 sshd[2251]: Accepted publickey for core from 139.178.68.195 port 45178 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:28.958971 sshd[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:28.965149 systemd-logind[1944]: New session 7 of user core. Jan 30 13:53:28.974181 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:53:29.106263 sudo[2254]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:53:29.106729 sudo[2254]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:53:29.136339 sudo[2254]: pam_unix(sudo:session): session closed for user root Jan 30 13:53:29.162127 sshd[2251]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:29.170207 systemd[1]: sshd@6-172.31.27.35:22-139.178.68.195:45178.service: Deactivated successfully. Jan 30 13:53:29.173492 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:53:29.175952 systemd-logind[1944]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:53:29.177918 systemd-logind[1944]: Removed session 7. Jan 30 13:53:29.202567 systemd[1]: Started sshd@7-172.31.27.35:22-139.178.68.195:45190.service - OpenSSH per-connection server daemon (139.178.68.195:45190). 
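The first privileged command in session 7 is `sudo setenforce 1`, flipping SELinux into enforcing mode for the running system (the change does not persist across reboots unless the SELinux config file also says enforcing). A hedged check of the result, assuming the usual libselinux utilities are present on the host:

    getenforce            # reports Enforcing, Permissive, or Disabled
    sudo setenforce 1     # enforce until the next boot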
Jan 30 13:53:29.362043 sshd[2259]: Accepted publickey for core from 139.178.68.195 port 45190 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:29.364077 sshd[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:29.369716 systemd-logind[1944]: New session 8 of user core. Jan 30 13:53:29.379208 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:53:29.479908 sudo[2263]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:53:29.480345 sudo[2263]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:53:29.484653 sudo[2263]: pam_unix(sudo:session): session closed for user root Jan 30 13:53:29.492585 sudo[2262]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:53:29.492870 sudo[2262]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:53:29.508366 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:53:29.510242 auditctl[2266]: No rules Jan 30 13:53:29.510795 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:53:29.511046 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:53:29.514192 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:53:29.546908 augenrules[2284]: No rules Jan 30 13:53:29.548330 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:53:29.550355 sudo[2262]: pam_unix(sudo:session): session closed for user root Jan 30 13:53:29.574046 sshd[2259]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:29.577321 systemd[1]: sshd@7-172.31.27.35:22-139.178.68.195:45190.service: Deactivated successfully. Jan 30 13:53:29.579279 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:53:29.580871 systemd-logind[1944]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:53:29.582240 systemd-logind[1944]: Removed session 8. Jan 30 13:53:29.612361 systemd[1]: Started sshd@8-172.31.27.35:22-139.178.68.195:45198.service - OpenSSH per-connection server daemon (139.178.68.195:45198). Jan 30 13:53:29.764191 sshd[2292]: Accepted publickey for core from 139.178.68.195 port 45198 ssh2: RSA SHA256:sOLO7fjvEAJcFfGBVs/wB6N/GT+9eFp4KClrVR9nxhs Jan 30 13:53:29.765877 sshd[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:29.771048 systemd-logind[1944]: New session 9 of user core. Jan 30 13:53:29.779712 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:53:29.873824 sudo[2295]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:53:29.874228 sudo[2295]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:53:30.837896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:53:30.845331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:53:30.977064 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:53:30.977184 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:53:30.977511 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:53:30.986467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
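Session 8 removes the shipped audit rule files and restarts audit-rules.service, after which both auditctl and augenrules report "No rules", i.e. the kernel audit ruleset is now empty. The loaded ruleset can be inspected directly at any time; a short sketch, assuming the audit userland tools on the host:

    sudo auditctl -l    # list rules loaded in the kernel ("No rules" after the step above)
    sudo auditctl -s    # audit subsystem status: enabled flag, backlog limit, lost events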
Jan 30 13:53:31.023733 systemd[1]: Reloading requested from client PID 2332 ('systemctl') (unit session-9.scope)... Jan 30 13:53:31.023755 systemd[1]: Reloading... Jan 30 13:53:31.173999 zram_generator::config[2372]: No configuration found. Jan 30 13:53:31.308753 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:53:31.402300 systemd[1]: Reloading finished in 377 ms. Jan 30 13:53:31.454723 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:53:31.454850 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:53:31.455163 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:53:31.474031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:53:31.660728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:53:31.675433 (kubelet)[2431]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:53:31.718950 kubelet[2431]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:53:31.718950 kubelet[2431]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:53:31.718950 kubelet[2431]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:53:31.721328 kubelet[2431]: I0130 13:53:31.721273 2431 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:53:32.130073 kubelet[2431]: I0130 13:53:32.129941 2431 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:53:32.130073 kubelet[2431]: I0130 13:53:32.129971 2431 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:53:32.130606 kubelet[2431]: I0130 13:53:32.130312 2431 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:53:32.209077 kubelet[2431]: I0130 13:53:32.209039 2431 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:53:32.224453 kubelet[2431]: E0130 13:53:32.224406 2431 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:53:32.224453 kubelet[2431]: I0130 13:53:32.224437 2431 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:53:32.242499 kubelet[2431]: I0130 13:53:32.242396 2431 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:53:32.248683 kubelet[2431]: I0130 13:53:32.247714 2431 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:53:32.248683 kubelet[2431]: I0130 13:53:32.247949 2431 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:53:32.248683 kubelet[2431]: I0130 13:53:32.248010 2431 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.27.35","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:53:32.248683 kubelet[2431]: I0130 13:53:32.248375 2431 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:53:32.249941 kubelet[2431]: I0130 13:53:32.248387 2431 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:53:32.249941 kubelet[2431]: I0130 13:53:32.248530 2431 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:53:32.258181 kubelet[2431]: I0130 13:53:32.258145 2431 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:53:32.258382 kubelet[2431]: I0130 13:53:32.258369 2431 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:53:32.258492 kubelet[2431]: I0130 13:53:32.258483 2431 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:53:32.259060 kubelet[2431]: I0130 13:53:32.259038 2431 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:53:32.259442 kubelet[2431]: E0130 13:53:32.259406 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:32.259536 kubelet[2431]: E0130 13:53:32.259454 2431 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:32.269969 kubelet[2431]: I0130 13:53:32.269943 2431 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:53:32.272662 kubelet[2431]: I0130 13:53:32.272635 2431 
kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:53:32.272794 kubelet[2431]: W0130 13:53:32.272716 2431 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:53:32.273502 kubelet[2431]: I0130 13:53:32.273470 2431 server.go:1269] "Started kubelet" Jan 30 13:53:32.281774 kubelet[2431]: I0130 13:53:32.280284 2431 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:53:32.281774 kubelet[2431]: I0130 13:53:32.280731 2431 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:53:32.281774 kubelet[2431]: I0130 13:53:32.281195 2431 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:53:32.281774 kubelet[2431]: I0130 13:53:32.281612 2431 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:53:32.284570 kubelet[2431]: I0130 13:53:32.284548 2431 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:53:32.285139 kubelet[2431]: I0130 13:53:32.285109 2431 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:53:32.287888 kubelet[2431]: E0130 13:53:32.287535 2431 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:53:32.288537 kubelet[2431]: I0130 13:53:32.288023 2431 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:53:32.288537 kubelet[2431]: I0130 13:53:32.288163 2431 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:53:32.288537 kubelet[2431]: I0130 13:53:32.288221 2431 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:53:32.289123 kubelet[2431]: E0130 13:53:32.288845 2431 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.27.35\" not found" Jan 30 13:53:32.289123 kubelet[2431]: I0130 13:53:32.289032 2431 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:53:32.289209 kubelet[2431]: I0130 13:53:32.289139 2431 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:53:32.291936 kubelet[2431]: I0130 13:53:32.291168 2431 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:53:32.311329 kubelet[2431]: E0130 13:53:32.311294 2431 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.27.35\" not found" node="172.31.27.35" Jan 30 13:53:32.318530 kubelet[2431]: I0130 13:53:32.318509 2431 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:53:32.318917 kubelet[2431]: I0130 13:53:32.318694 2431 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:53:32.318917 kubelet[2431]: I0130 13:53:32.318717 2431 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:53:32.323780 kubelet[2431]: I0130 13:53:32.323663 2431 policy_none.go:49] "None policy: Start" Jan 30 13:53:32.324941 kubelet[2431]: I0130 13:53:32.324750 2431 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:53:32.325116 kubelet[2431]: I0130 13:53:32.324865 2431 state_mem.go:35] "Initializing new 
in-memory state store" Jan 30 13:53:32.337698 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:53:32.349581 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:53:32.356189 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:53:32.361948 kubelet[2431]: I0130 13:53:32.361913 2431 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:53:32.362150 kubelet[2431]: I0130 13:53:32.362129 2431 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:53:32.362224 kubelet[2431]: I0130 13:53:32.362146 2431 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:53:32.363934 kubelet[2431]: I0130 13:53:32.363912 2431 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:53:32.370124 kubelet[2431]: E0130 13:53:32.370060 2431 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.27.35\" not found" Jan 30 13:53:32.387324 kubelet[2431]: I0130 13:53:32.387199 2431 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:53:32.392154 kubelet[2431]: I0130 13:53:32.392112 2431 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:53:32.392154 kubelet[2431]: I0130 13:53:32.392149 2431 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:53:32.392303 kubelet[2431]: I0130 13:53:32.392174 2431 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:53:32.392303 kubelet[2431]: E0130 13:53:32.392223 2431 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 30 13:53:32.463128 kubelet[2431]: I0130 13:53:32.463101 2431 kubelet_node_status.go:72] "Attempting to register node" node="172.31.27.35" Jan 30 13:53:32.486071 kubelet[2431]: I0130 13:53:32.486027 2431 kubelet_node_status.go:75] "Successfully registered node" node="172.31.27.35" Jan 30 13:53:32.617340 kubelet[2431]: I0130 13:53:32.617284 2431 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 30 13:53:32.617856 containerd[1963]: time="2025-01-30T13:53:32.617664955Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:53:32.618267 kubelet[2431]: I0130 13:53:32.617928 2431 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 30 13:53:32.746402 sudo[2295]: pam_unix(sudo:session): session closed for user root Jan 30 13:53:32.769848 sshd[2292]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:32.792386 systemd[1]: sshd@8-172.31.27.35:22-139.178.68.195:45198.service: Deactivated successfully. Jan 30 13:53:32.812649 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:53:32.814555 systemd-logind[1944]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:53:32.816118 systemd-logind[1944]: Removed session 9. 
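By 13:53:32 the kubelet has registered node 172.31.27.35 and picked up pod CIDR 192.168.1.0/24 from the node object. From the control-plane side this can be confirmed with kubectl, assuming a kubeconfig with access to this cluster (kubectl itself never appears in this log):

    kubectl get node 172.31.27.35 -o wide
    kubectl get node 172.31.27.35 -o jsonpath='{.spec.podCIDR}{"\n"}'   # expect 192.168.1.0/24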
Jan 30 13:53:33.132764 kubelet[2431]: I0130 13:53:33.132638 2431 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 13:53:33.133354 kubelet[2431]: W0130 13:53:33.132838 2431 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 13:53:33.133354 kubelet[2431]: W0130 13:53:33.132883 2431 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 13:53:33.133354 kubelet[2431]: W0130 13:53:33.133168 2431 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 13:53:33.260179 kubelet[2431]: I0130 13:53:33.260122 2431 apiserver.go:52] "Watching apiserver" Jan 30 13:53:33.260440 kubelet[2431]: E0130 13:53:33.260130 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:33.282727 systemd[1]: Created slice kubepods-burstable-podc768a57c_db80_4717_b3a5_af2e7334b2bd.slice - libcontainer container kubepods-burstable-podc768a57c_db80_4717_b3a5_af2e7334b2bd.slice. Jan 30 13:53:33.289559 kubelet[2431]: I0130 13:53:33.289510 2431 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:53:33.296641 kubelet[2431]: I0130 13:53:33.296584 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b15fdc33-7b49-4839-9be1-2abae2f2124b-xtables-lock\") pod \"kube-proxy-8mkcj\" (UID: \"b15fdc33-7b49-4839-9be1-2abae2f2124b\") " pod="kube-system/kube-proxy-8mkcj" Jan 30 13:53:33.296798 kubelet[2431]: I0130 13:53:33.296673 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-cni-path\") pod \"cilium-k6msd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " pod="kube-system/cilium-k6msd" Jan 30 13:53:33.296798 kubelet[2431]: I0130 13:53:33.296701 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-cilium-run\") pod \"cilium-k6msd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " pod="kube-system/cilium-k6msd" Jan 30 13:53:33.296798 kubelet[2431]: I0130 13:53:33.296722 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-hostproc\") pod \"cilium-k6msd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " pod="kube-system/cilium-k6msd" Jan 30 13:53:33.296798 kubelet[2431]: I0130 13:53:33.296745 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c768a57c-db80-4717-b3a5-af2e7334b2bd-hubble-tls\") pod \"cilium-k6msd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " 
pod="kube-system/cilium-k6msd" Jan 30 13:53:33.296798 kubelet[2431]: I0130 13:53:33.296769 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgvv7\" (UniqueName: \"kubernetes.io/projected/c768a57c-db80-4717-b3a5-af2e7334b2bd-kube-api-access-lgvv7\") pod \"cilium-k6msd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " pod="kube-system/cilium-k6msd" Jan 30 13:53:33.296798 kubelet[2431]: I0130 13:53:33.296792 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b15fdc33-7b49-4839-9be1-2abae2f2124b-kube-proxy\") pod \"kube-proxy-8mkcj\" (UID: \"b15fdc33-7b49-4839-9be1-2abae2f2124b\") " pod="kube-system/kube-proxy-8mkcj" Jan 30 13:53:33.297322 kubelet[2431]: I0130 13:53:33.296817 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtzk5\" (UniqueName: \"kubernetes.io/projected/b15fdc33-7b49-4839-9be1-2abae2f2124b-kube-api-access-gtzk5\") pod \"kube-proxy-8mkcj\" (UID: \"b15fdc33-7b49-4839-9be1-2abae2f2124b\") " pod="kube-system/kube-proxy-8mkcj" Jan 30 13:53:33.297322 kubelet[2431]: I0130 13:53:33.296843 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c768a57c-db80-4717-b3a5-af2e7334b2bd-cilium-config-path\") pod \"cilium-k6msd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " pod="kube-system/cilium-k6msd" Jan 30 13:53:33.297322 kubelet[2431]: I0130 13:53:33.296867 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-host-proc-sys-kernel\") pod \"cilium-k6msd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " pod="kube-system/cilium-k6msd" Jan 30 13:53:33.297322 kubelet[2431]: I0130 13:53:33.296900 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-bpf-maps\") pod \"cilium-k6msd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " pod="kube-system/cilium-k6msd" Jan 30 13:53:33.297322 kubelet[2431]: I0130 13:53:33.296923 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-lib-modules\") pod \"cilium-k6msd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " pod="kube-system/cilium-k6msd" Jan 30 13:53:33.297697 kubelet[2431]: I0130 13:53:33.297148 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-etc-cni-netd\") pod \"cilium-k6msd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " pod="kube-system/cilium-k6msd" Jan 30 13:53:33.297697 kubelet[2431]: I0130 13:53:33.297218 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-xtables-lock\") pod \"cilium-k6msd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " pod="kube-system/cilium-k6msd" Jan 30 13:53:33.297697 kubelet[2431]: I0130 13:53:33.297298 2431 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c768a57c-db80-4717-b3a5-af2e7334b2bd-clustermesh-secrets\") pod \"cilium-k6msd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " pod="kube-system/cilium-k6msd" Jan 30 13:53:33.297697 kubelet[2431]: I0130 13:53:33.297326 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-host-proc-sys-net\") pod \"cilium-k6msd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " pod="kube-system/cilium-k6msd" Jan 30 13:53:33.297697 kubelet[2431]: I0130 13:53:33.297388 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b15fdc33-7b49-4839-9be1-2abae2f2124b-lib-modules\") pod \"kube-proxy-8mkcj\" (UID: \"b15fdc33-7b49-4839-9be1-2abae2f2124b\") " pod="kube-system/kube-proxy-8mkcj" Jan 30 13:53:33.297697 kubelet[2431]: I0130 13:53:33.297416 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-cilium-cgroup\") pod \"cilium-k6msd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " pod="kube-system/cilium-k6msd" Jan 30 13:53:33.308456 systemd[1]: Created slice kubepods-besteffort-podb15fdc33_7b49_4839_9be1_2abae2f2124b.slice - libcontainer container kubepods-besteffort-podb15fdc33_7b49_4839_9be1_2abae2f2124b.slice. Jan 30 13:53:33.606408 containerd[1963]: time="2025-01-30T13:53:33.606272642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k6msd,Uid:c768a57c-db80-4717-b3a5-af2e7334b2bd,Namespace:kube-system,Attempt:0,}" Jan 30 13:53:33.616648 containerd[1963]: time="2025-01-30T13:53:33.616029517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8mkcj,Uid:b15fdc33-7b49-4839-9be1-2abae2f2124b,Namespace:kube-system,Attempt:0,}" Jan 30 13:53:34.168346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount985857902.mount: Deactivated successfully. 
Jan 30 13:53:34.180838 containerd[1963]: time="2025-01-30T13:53:34.180788518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:53:34.181620 containerd[1963]: time="2025-01-30T13:53:34.181452327Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:53:34.181959 containerd[1963]: time="2025-01-30T13:53:34.181894593Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:53:34.183800 containerd[1963]: time="2025-01-30T13:53:34.183765761Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:53:34.184714 containerd[1963]: time="2025-01-30T13:53:34.184674812Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:53:34.186591 containerd[1963]: time="2025-01-30T13:53:34.186542230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:53:34.188428 containerd[1963]: time="2025-01-30T13:53:34.187613884Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 571.467331ms" Jan 30 13:53:34.190474 containerd[1963]: time="2025-01-30T13:53:34.190443316Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 584.063566ms" Jan 30 13:53:34.260846 kubelet[2431]: E0130 13:53:34.260688 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:34.457100 containerd[1963]: time="2025-01-30T13:53:34.453431570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:34.457100 containerd[1963]: time="2025-01-30T13:53:34.453674370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:34.457100 containerd[1963]: time="2025-01-30T13:53:34.453739845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:34.457100 containerd[1963]: time="2025-01-30T13:53:34.454025711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:34.472875 containerd[1963]: time="2025-01-30T13:53:34.472513672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:34.472875 containerd[1963]: time="2025-01-30T13:53:34.472583408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:34.472875 containerd[1963]: time="2025-01-30T13:53:34.472606894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:34.472875 containerd[1963]: time="2025-01-30T13:53:34.472731946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:34.629731 systemd[1]: run-containerd-runc-k8s.io-90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148-runc.j3OjKP.mount: Deactivated successfully. Jan 30 13:53:34.646248 systemd[1]: Started cri-containerd-90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148.scope - libcontainer container 90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148. Jan 30 13:53:34.649442 systemd[1]: Started cri-containerd-bfea481941cf8146707d188e30b2581323d1e23ab215ec54f39e273977822ba2.scope - libcontainer container bfea481941cf8146707d188e30b2581323d1e23ab215ec54f39e273977822ba2. Jan 30 13:53:34.689559 containerd[1963]: time="2025-01-30T13:53:34.689411175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k6msd,Uid:c768a57c-db80-4717-b3a5-af2e7334b2bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148\"" Jan 30 13:53:34.697818 containerd[1963]: time="2025-01-30T13:53:34.697547416Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:53:34.703305 containerd[1963]: time="2025-01-30T13:53:34.703274554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8mkcj,Uid:b15fdc33-7b49-4839-9be1-2abae2f2124b,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfea481941cf8146707d188e30b2581323d1e23ab215ec54f39e273977822ba2\"" Jan 30 13:53:35.261035 kubelet[2431]: E0130 13:53:35.260959 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:36.263033 kubelet[2431]: E0130 13:53:36.261457 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:37.261651 kubelet[2431]: E0130 13:53:37.261602 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:38.261785 kubelet[2431]: E0130 13:53:38.261751 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:39.101868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1306323167.mount: Deactivated successfully. 
Jan 30 13:53:39.263517 kubelet[2431]: E0130 13:53:39.263331 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:40.264458 kubelet[2431]: E0130 13:53:40.264416 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:41.264992 kubelet[2431]: E0130 13:53:41.264919 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:41.593513 containerd[1963]: time="2025-01-30T13:53:41.592245173Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:41.593920 containerd[1963]: time="2025-01-30T13:53:41.593668918Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 13:53:41.594693 containerd[1963]: time="2025-01-30T13:53:41.594658523Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:41.597052 containerd[1963]: time="2025-01-30T13:53:41.597009779Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.899415926s" Jan 30 13:53:41.597052 containerd[1963]: time="2025-01-30T13:53:41.597045759Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 13:53:41.598855 containerd[1963]: time="2025-01-30T13:53:41.598268738Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 30 13:53:41.599947 containerd[1963]: time="2025-01-30T13:53:41.599899476Z" level=info msg="CreateContainer within sandbox \"90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:53:41.620840 containerd[1963]: time="2025-01-30T13:53:41.620789885Z" level=info msg="CreateContainer within sandbox \"90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319\"" Jan 30 13:53:41.622035 containerd[1963]: time="2025-01-30T13:53:41.621997570Z" level=info msg="StartContainer for \"18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319\"" Jan 30 13:53:41.661226 systemd[1]: Started cri-containerd-18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319.scope - libcontainer container 18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319. 
Jan 30 13:53:41.692422 containerd[1963]: time="2025-01-30T13:53:41.692364136Z" level=info msg="StartContainer for \"18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319\" returns successfully" Jan 30 13:53:41.706722 systemd[1]: cri-containerd-18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319.scope: Deactivated successfully. Jan 30 13:53:41.890384 containerd[1963]: time="2025-01-30T13:53:41.890234786Z" level=info msg="shim disconnected" id=18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319 namespace=k8s.io Jan 30 13:53:41.890384 containerd[1963]: time="2025-01-30T13:53:41.890294243Z" level=warning msg="cleaning up after shim disconnected" id=18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319 namespace=k8s.io Jan 30 13:53:41.890384 containerd[1963]: time="2025-01-30T13:53:41.890305470Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:53:42.265420 kubelet[2431]: E0130 13:53:42.265381 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:42.467724 containerd[1963]: time="2025-01-30T13:53:42.467675542Z" level=info msg="CreateContainer within sandbox \"90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:53:42.492767 containerd[1963]: time="2025-01-30T13:53:42.492715978Z" level=info msg="CreateContainer within sandbox \"90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542\"" Jan 30 13:53:42.493706 containerd[1963]: time="2025-01-30T13:53:42.493669843Z" level=info msg="StartContainer for \"0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542\"" Jan 30 13:53:42.559312 systemd[1]: Started cri-containerd-0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542.scope - libcontainer container 0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542. Jan 30 13:53:42.628400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319-rootfs.mount: Deactivated successfully. Jan 30 13:53:42.630994 containerd[1963]: time="2025-01-30T13:53:42.628732099Z" level=info msg="StartContainer for \"0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542\" returns successfully" Jan 30 13:53:42.643265 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:53:42.643731 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:53:42.644146 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:53:42.655436 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:53:42.657448 systemd[1]: cri-containerd-0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542.scope: Deactivated successfully. Jan 30 13:53:42.695583 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:53:42.719866 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542-rootfs.mount: Deactivated successfully. 
Jan 30 13:53:42.757962 containerd[1963]: time="2025-01-30T13:53:42.757899779Z" level=info msg="shim disconnected" id=0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542 namespace=k8s.io Jan 30 13:53:42.758492 containerd[1963]: time="2025-01-30T13:53:42.758300860Z" level=warning msg="cleaning up after shim disconnected" id=0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542 namespace=k8s.io Jan 30 13:53:42.758492 containerd[1963]: time="2025-01-30T13:53:42.758327476Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:53:43.023534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1399545654.mount: Deactivated successfully. Jan 30 13:53:43.266485 kubelet[2431]: E0130 13:53:43.266330 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:43.464348 containerd[1963]: time="2025-01-30T13:53:43.464250657Z" level=info msg="CreateContainer within sandbox \"90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:53:43.488477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2169503405.mount: Deactivated successfully. Jan 30 13:53:43.507150 containerd[1963]: time="2025-01-30T13:53:43.507098158Z" level=info msg="CreateContainer within sandbox \"90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7\"" Jan 30 13:53:43.508487 containerd[1963]: time="2025-01-30T13:53:43.508439153Z" level=info msg="StartContainer for \"1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7\"" Jan 30 13:53:43.565354 systemd[1]: Started cri-containerd-1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7.scope - libcontainer container 1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7. Jan 30 13:53:43.621996 containerd[1963]: time="2025-01-30T13:53:43.621855204Z" level=info msg="StartContainer for \"1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7\" returns successfully" Jan 30 13:53:43.626175 systemd[1]: cri-containerd-1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7.scope: Deactivated successfully. Jan 30 13:53:43.671935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7-rootfs.mount: Deactivated successfully. 
Jan 30 13:53:43.797266 containerd[1963]: time="2025-01-30T13:53:43.796365167Z" level=info msg="shim disconnected" id=1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7 namespace=k8s.io Jan 30 13:53:43.797266 containerd[1963]: time="2025-01-30T13:53:43.796513902Z" level=warning msg="cleaning up after shim disconnected" id=1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7 namespace=k8s.io Jan 30 13:53:43.797266 containerd[1963]: time="2025-01-30T13:53:43.796529967Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:53:43.809845 containerd[1963]: time="2025-01-30T13:53:43.809784500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:43.812942 containerd[1963]: time="2025-01-30T13:53:43.810901805Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 30 13:53:43.812942 containerd[1963]: time="2025-01-30T13:53:43.812539378Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:43.816595 containerd[1963]: time="2025-01-30T13:53:43.816531018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:53:43.817506 containerd[1963]: time="2025-01-30T13:53:43.817470911Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.219165461s" Jan 30 13:53:43.817750 containerd[1963]: time="2025-01-30T13:53:43.817633272Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 30 13:53:43.828499 containerd[1963]: time="2025-01-30T13:53:43.828341464Z" level=info msg="CreateContainer within sandbox \"bfea481941cf8146707d188e30b2581323d1e23ab215ec54f39e273977822ba2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:53:43.845200 containerd[1963]: time="2025-01-30T13:53:43.845157655Z" level=info msg="CreateContainer within sandbox \"bfea481941cf8146707d188e30b2581323d1e23ab215ec54f39e273977822ba2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4894efae61f96db0b61805cf9f527189bbe16e9a6cbad13d346ba15d9214da1d\"" Jan 30 13:53:43.845892 containerd[1963]: time="2025-01-30T13:53:43.845858506Z" level=info msg="StartContainer for \"4894efae61f96db0b61805cf9f527189bbe16e9a6cbad13d346ba15d9214da1d\"" Jan 30 13:53:43.905276 systemd[1]: Started cri-containerd-4894efae61f96db0b61805cf9f527189bbe16e9a6cbad13d346ba15d9214da1d.scope - libcontainer container 4894efae61f96db0b61805cf9f527189bbe16e9a6cbad13d346ba15d9214da1d. 
Jan 30 13:53:43.955071 containerd[1963]: time="2025-01-30T13:53:43.952120361Z" level=info msg="StartContainer for \"4894efae61f96db0b61805cf9f527189bbe16e9a6cbad13d346ba15d9214da1d\" returns successfully" Jan 30 13:53:44.266957 kubelet[2431]: E0130 13:53:44.266922 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:44.466583 containerd[1963]: time="2025-01-30T13:53:44.466544016Z" level=info msg="CreateContainer within sandbox \"90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:53:44.479883 containerd[1963]: time="2025-01-30T13:53:44.479836542Z" level=info msg="CreateContainer within sandbox \"90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58\"" Jan 30 13:53:44.480788 containerd[1963]: time="2025-01-30T13:53:44.480754686Z" level=info msg="StartContainer for \"8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58\"" Jan 30 13:53:44.515665 systemd[1]: Started cri-containerd-8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58.scope - libcontainer container 8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58. Jan 30 13:53:44.543742 systemd[1]: cri-containerd-8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58.scope: Deactivated successfully. Jan 30 13:53:44.546754 containerd[1963]: time="2025-01-30T13:53:44.546529618Z" level=info msg="StartContainer for \"8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58\" returns successfully" Jan 30 13:53:44.575871 containerd[1963]: time="2025-01-30T13:53:44.575805225Z" level=info msg="shim disconnected" id=8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58 namespace=k8s.io Jan 30 13:53:44.575871 containerd[1963]: time="2025-01-30T13:53:44.575865229Z" level=warning msg="cleaning up after shim disconnected" id=8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58 namespace=k8s.io Jan 30 13:53:44.575871 containerd[1963]: time="2025-01-30T13:53:44.575876436Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:53:45.267314 kubelet[2431]: E0130 13:53:45.267267 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:45.486756 containerd[1963]: time="2025-01-30T13:53:45.486712013Z" level=info msg="CreateContainer within sandbox \"90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:53:45.514331 containerd[1963]: time="2025-01-30T13:53:45.514291246Z" level=info msg="CreateContainer within sandbox \"90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f\"" Jan 30 13:53:45.515036 containerd[1963]: time="2025-01-30T13:53:45.515005130Z" level=info msg="StartContainer for \"fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f\"" Jan 30 13:53:45.550198 kubelet[2431]: I0130 13:53:45.548629 2431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8mkcj" podStartSLOduration=4.432515958 podStartE2EDuration="13.548610242s" podCreationTimestamp="2025-01-30 13:53:32 +0000 
UTC" firstStartedPulling="2025-01-30 13:53:34.705263272 +0000 UTC m=+3.026177447" lastFinishedPulling="2025-01-30 13:53:43.821357548 +0000 UTC m=+12.142271731" observedRunningTime="2025-01-30 13:53:44.531315378 +0000 UTC m=+12.852229574" watchObservedRunningTime="2025-01-30 13:53:45.548610242 +0000 UTC m=+13.869524440" Jan 30 13:53:45.579200 systemd[1]: Started cri-containerd-fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f.scope - libcontainer container fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f. Jan 30 13:53:45.635600 containerd[1963]: time="2025-01-30T13:53:45.635485730Z" level=info msg="StartContainer for \"fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f\" returns successfully" Jan 30 13:53:45.681281 systemd[1]: run-containerd-runc-k8s.io-fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f-runc.ORrOtg.mount: Deactivated successfully. Jan 30 13:53:45.864779 kubelet[2431]: I0130 13:53:45.863367 2431 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 13:53:46.198001 kernel: Initializing XFRM netlink socket Jan 30 13:53:46.268215 kubelet[2431]: E0130 13:53:46.268158 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:46.553828 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 30 13:53:47.269008 kubelet[2431]: E0130 13:53:47.268944 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:47.925192 systemd-networkd[1801]: cilium_host: Link UP Jan 30 13:53:47.927572 systemd-networkd[1801]: cilium_net: Link UP Jan 30 13:53:47.928269 systemd-networkd[1801]: cilium_net: Gained carrier Jan 30 13:53:47.928530 (udev-worker)[3118]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:53:47.929440 systemd-networkd[1801]: cilium_host: Gained carrier Jan 30 13:53:47.935360 (udev-worker)[3119]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:53:48.108738 (udev-worker)[3138]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:53:48.122825 systemd-networkd[1801]: cilium_vxlan: Link UP Jan 30 13:53:48.122841 systemd-networkd[1801]: cilium_vxlan: Gained carrier Jan 30 13:53:48.269588 kubelet[2431]: E0130 13:53:48.269477 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:48.483098 kernel: NET: Registered PF_ALG protocol family Jan 30 13:53:48.575191 systemd-networkd[1801]: cilium_net: Gained IPv6LL Jan 30 13:53:48.579031 systemd-networkd[1801]: cilium_host: Gained IPv6LL Jan 30 13:53:49.269619 kubelet[2431]: E0130 13:53:49.269577 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:49.279886 systemd-networkd[1801]: cilium_vxlan: Gained IPv6LL Jan 30 13:53:49.592297 (udev-worker)[3139]: Network interface NamePolicy= disabled on kernel command line. 
Jan 30 13:53:49.594865 systemd-networkd[1801]: lxc_health: Link UP Jan 30 13:53:49.607333 systemd-networkd[1801]: lxc_health: Gained carrier Jan 30 13:53:50.270058 kubelet[2431]: E0130 13:53:50.269991 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:50.754074 kubelet[2431]: I0130 13:53:50.754006 2431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k6msd" podStartSLOduration=11.850901687 podStartE2EDuration="18.753965908s" podCreationTimestamp="2025-01-30 13:53:32 +0000 UTC" firstStartedPulling="2025-01-30 13:53:34.695041904 +0000 UTC m=+3.015956080" lastFinishedPulling="2025-01-30 13:53:41.598106108 +0000 UTC m=+9.919020301" observedRunningTime="2025-01-30 13:53:46.524830467 +0000 UTC m=+14.845744662" watchObservedRunningTime="2025-01-30 13:53:50.753965908 +0000 UTC m=+19.074880100" Jan 30 13:53:51.174576 systemd[1]: Created slice kubepods-besteffort-pod8c240222_e73f_4d3f_a50c_86c17102a78d.slice - libcontainer container kubepods-besteffort-pod8c240222_e73f_4d3f_a50c_86c17102a78d.slice. Jan 30 13:53:51.198173 systemd-networkd[1801]: lxc_health: Gained IPv6LL Jan 30 13:53:51.270772 kubelet[2431]: E0130 13:53:51.270727 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:51.347018 kubelet[2431]: I0130 13:53:51.346948 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kswsk\" (UniqueName: \"kubernetes.io/projected/8c240222-e73f-4d3f-a50c-86c17102a78d-kube-api-access-kswsk\") pod \"nginx-deployment-8587fbcb89-mrdmr\" (UID: \"8c240222-e73f-4d3f-a50c-86c17102a78d\") " pod="default/nginx-deployment-8587fbcb89-mrdmr" Jan 30 13:53:51.781382 containerd[1963]: time="2025-01-30T13:53:51.781319286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mrdmr,Uid:8c240222-e73f-4d3f-a50c-86c17102a78d,Namespace:default,Attempt:0,}" Jan 30 13:53:51.918506 systemd-networkd[1801]: lxc8eee26c326d5: Link UP Jan 30 13:53:51.925008 kernel: eth0: renamed from tmp171f4 Jan 30 13:53:51.929361 systemd-networkd[1801]: lxc8eee26c326d5: Gained carrier Jan 30 13:53:52.259619 kubelet[2431]: E0130 13:53:52.259538 2431 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:52.272219 kubelet[2431]: E0130 13:53:52.272136 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:53.245149 systemd-networkd[1801]: lxc8eee26c326d5: Gained IPv6LL Jan 30 13:53:53.274479 kubelet[2431]: E0130 13:53:53.274423 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:54.274756 kubelet[2431]: E0130 13:53:54.274696 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:55.275658 kubelet[2431]: E0130 13:53:55.275606 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:55.833164 containerd[1963]: time="2025-01-30T13:53:55.833045815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:53:55.833164 containerd[1963]: time="2025-01-30T13:53:55.833105735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:53:55.833164 containerd[1963]: time="2025-01-30T13:53:55.833121680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:55.833777 containerd[1963]: time="2025-01-30T13:53:55.833218739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:53:55.864250 systemd[1]: Started cri-containerd-171f4c2e1f6b66283a3614f8183439f839f3b3ccfdb4c65ca0dda3af6b68a321.scope - libcontainer container 171f4c2e1f6b66283a3614f8183439f839f3b3ccfdb4c65ca0dda3af6b68a321. Jan 30 13:53:55.914077 containerd[1963]: time="2025-01-30T13:53:55.914027984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mrdmr,Uid:8c240222-e73f-4d3f-a50c-86c17102a78d,Namespace:default,Attempt:0,} returns sandbox id \"171f4c2e1f6b66283a3614f8183439f839f3b3ccfdb4c65ca0dda3af6b68a321\"" Jan 30 13:53:55.915808 containerd[1963]: time="2025-01-30T13:53:55.915769099Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:53:56.276424 kubelet[2431]: E0130 13:53:56.276378 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:57.278996 kubelet[2431]: E0130 13:53:57.277297 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:57.868971 ntpd[1940]: Listen normally on 8 cilium_host 192.168.1.227:123 Jan 30 13:53:57.869950 ntpd[1940]: 30 Jan 13:53:57 ntpd[1940]: Listen normally on 8 cilium_host 192.168.1.227:123 Jan 30 13:53:57.869950 ntpd[1940]: 30 Jan 13:53:57 ntpd[1940]: Listen normally on 9 cilium_net [fe80::888a:12ff:fe76:4a26%3]:123 Jan 30 13:53:57.869950 ntpd[1940]: 30 Jan 13:53:57 ntpd[1940]: Listen normally on 10 cilium_host [fe80::2c79:5aff:fe17:cd48%4]:123 Jan 30 13:53:57.869950 ntpd[1940]: 30 Jan 13:53:57 ntpd[1940]: Listen normally on 11 cilium_vxlan [fe80::707d:58ff:fec0:4fbf%5]:123 Jan 30 13:53:57.869950 ntpd[1940]: 30 Jan 13:53:57 ntpd[1940]: Listen normally on 12 lxc_health [fe80::1c4d:baff:fe0a:143d%7]:123 Jan 30 13:53:57.869950 ntpd[1940]: 30 Jan 13:53:57 ntpd[1940]: Listen normally on 13 lxc8eee26c326d5 [fe80::d025:83ff:fe34:7dda%9]:123 Jan 30 13:53:57.869161 ntpd[1940]: Listen normally on 9 cilium_net [fe80::888a:12ff:fe76:4a26%3]:123 Jan 30 13:53:57.869211 ntpd[1940]: Listen normally on 10 cilium_host [fe80::2c79:5aff:fe17:cd48%4]:123 Jan 30 13:53:57.869240 ntpd[1940]: Listen normally on 11 cilium_vxlan [fe80::707d:58ff:fec0:4fbf%5]:123 Jan 30 13:53:57.869268 ntpd[1940]: Listen normally on 12 lxc_health [fe80::1c4d:baff:fe0a:143d%7]:123 Jan 30 13:53:57.869303 ntpd[1940]: Listen normally on 13 lxc8eee26c326d5 [fe80::d025:83ff:fe34:7dda%9]:123 Jan 30 13:53:58.277930 kubelet[2431]: E0130 13:53:58.277891 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:53:58.847757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355157133.mount: Deactivated successfully. 
Jan 30 13:53:59.279386 kubelet[2431]: E0130 13:53:59.279040 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:00.279715 kubelet[2431]: E0130 13:54:00.279654 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:00.408686 containerd[1963]: time="2025-01-30T13:54:00.408628596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:00.410003 containerd[1963]: time="2025-01-30T13:54:00.409850108Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 30 13:54:00.412925 containerd[1963]: time="2025-01-30T13:54:00.410958681Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:00.415193 containerd[1963]: time="2025-01-30T13:54:00.414068784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:00.416013 containerd[1963]: time="2025-01-30T13:54:00.415952646Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 4.500145615s" Jan 30 13:54:00.416158 containerd[1963]: time="2025-01-30T13:54:00.416018767Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 13:54:00.430182 containerd[1963]: time="2025-01-30T13:54:00.430005173Z" level=info msg="CreateContainer within sandbox \"171f4c2e1f6b66283a3614f8183439f839f3b3ccfdb4c65ca0dda3af6b68a321\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 30 13:54:00.451802 containerd[1963]: time="2025-01-30T13:54:00.451739116Z" level=info msg="CreateContainer within sandbox \"171f4c2e1f6b66283a3614f8183439f839f3b3ccfdb4c65ca0dda3af6b68a321\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"9b420afd6085fd3c65a2c8d7a3974109b4c322944c39880c1f1d90033099a649\"" Jan 30 13:54:00.454807 containerd[1963]: time="2025-01-30T13:54:00.454751552Z" level=info msg="StartContainer for \"9b420afd6085fd3c65a2c8d7a3974109b4c322944c39880c1f1d90033099a649\"" Jan 30 13:54:00.540564 systemd[1]: Started cri-containerd-9b420afd6085fd3c65a2c8d7a3974109b4c322944c39880c1f1d90033099a649.scope - libcontainer container 9b420afd6085fd3c65a2c8d7a3974109b4c322944c39880c1f1d90033099a649. 
Jan 30 13:54:00.579851 containerd[1963]: time="2025-01-30T13:54:00.579802329Z" level=info msg="StartContainer for \"9b420afd6085fd3c65a2c8d7a3974109b4c322944c39880c1f1d90033099a649\" returns successfully" Jan 30 13:54:01.280666 kubelet[2431]: E0130 13:54:01.280606 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:01.577491 kubelet[2431]: I0130 13:54:01.577178 2431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-mrdmr" podStartSLOduration=6.06543974 podStartE2EDuration="10.577152428s" podCreationTimestamp="2025-01-30 13:53:51 +0000 UTC" firstStartedPulling="2025-01-30 13:53:55.91531326 +0000 UTC m=+24.236227450" lastFinishedPulling="2025-01-30 13:54:00.427025956 +0000 UTC m=+28.747940138" observedRunningTime="2025-01-30 13:54:01.576594323 +0000 UTC m=+29.897508518" watchObservedRunningTime="2025-01-30 13:54:01.577152428 +0000 UTC m=+29.898066624" Jan 30 13:54:01.587627 update_engine[1946]: I20250130 13:54:01.587543 1946 update_attempter.cc:509] Updating boot flags... Jan 30 13:54:01.761010 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3630) Jan 30 13:54:02.130006 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3631) Jan 30 13:54:02.287425 kubelet[2431]: E0130 13:54:02.284618 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:02.359059 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3631) Jan 30 13:54:03.285162 kubelet[2431]: E0130 13:54:03.285123 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:04.286230 kubelet[2431]: E0130 13:54:04.286180 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:05.286519 kubelet[2431]: E0130 13:54:05.286384 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:06.286879 kubelet[2431]: E0130 13:54:06.286830 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:06.793939 systemd[1]: Created slice kubepods-besteffort-podbd3a2c1e_468b_44a0_8704_89b7d2757374.slice - libcontainer container kubepods-besteffort-podbd3a2c1e_468b_44a0_8704_89b7d2757374.slice. 
Jan 30 13:54:06.895655 kubelet[2431]: I0130 13:54:06.895557 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/bd3a2c1e-468b-44a0-8704-89b7d2757374-data\") pod \"nfs-server-provisioner-0\" (UID: \"bd3a2c1e-468b-44a0-8704-89b7d2757374\") " pod="default/nfs-server-provisioner-0" Jan 30 13:54:06.895655 kubelet[2431]: I0130 13:54:06.895614 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhbjw\" (UniqueName: \"kubernetes.io/projected/bd3a2c1e-468b-44a0-8704-89b7d2757374-kube-api-access-qhbjw\") pod \"nfs-server-provisioner-0\" (UID: \"bd3a2c1e-468b-44a0-8704-89b7d2757374\") " pod="default/nfs-server-provisioner-0" Jan 30 13:54:07.098401 containerd[1963]: time="2025-01-30T13:54:07.098269047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:bd3a2c1e-468b-44a0-8704-89b7d2757374,Namespace:default,Attempt:0,}" Jan 30 13:54:07.219008 systemd-networkd[1801]: lxc182b5fb07451: Link UP Jan 30 13:54:07.233133 (udev-worker)[3888]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:54:07.236938 kernel: eth0: renamed from tmp3cf1a Jan 30 13:54:07.241103 systemd-networkd[1801]: lxc182b5fb07451: Gained carrier Jan 30 13:54:07.287120 kubelet[2431]: E0130 13:54:07.287082 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:07.454533 containerd[1963]: time="2025-01-30T13:54:07.454327101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:07.454533 containerd[1963]: time="2025-01-30T13:54:07.454380550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:07.454533 containerd[1963]: time="2025-01-30T13:54:07.454395581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:07.454533 containerd[1963]: time="2025-01-30T13:54:07.454482309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:07.485537 systemd[1]: Started cri-containerd-3cf1ac8356d1b4a6f918892f0e7a0c1a35bafe849e0d1855c3e664b756883c75.scope - libcontainer container 3cf1ac8356d1b4a6f918892f0e7a0c1a35bafe849e0d1855c3e664b756883c75. 
Jan 30 13:54:07.561351 containerd[1963]: time="2025-01-30T13:54:07.561314805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:bd3a2c1e-468b-44a0-8704-89b7d2757374,Namespace:default,Attempt:0,} returns sandbox id \"3cf1ac8356d1b4a6f918892f0e7a0c1a35bafe849e0d1855c3e664b756883c75\"" Jan 30 13:54:07.568041 containerd[1963]: time="2025-01-30T13:54:07.567822182Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 30 13:54:08.288233 kubelet[2431]: E0130 13:54:08.287338 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:08.288880 systemd-networkd[1801]: lxc182b5fb07451: Gained IPv6LL Jan 30 13:54:09.288753 kubelet[2431]: E0130 13:54:09.288708 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:10.291352 kubelet[2431]: E0130 13:54:10.290932 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:10.343756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3415979256.mount: Deactivated successfully. Jan 30 13:54:10.868803 ntpd[1940]: Listen normally on 14 lxc182b5fb07451 [fe80::883e:28ff:fe0e:2750%11]:123 Jan 30 13:54:10.869530 ntpd[1940]: 30 Jan 13:54:10 ntpd[1940]: Listen normally on 14 lxc182b5fb07451 [fe80::883e:28ff:fe0e:2750%11]:123 Jan 30 13:54:11.292887 kubelet[2431]: E0130 13:54:11.292599 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:12.258956 kubelet[2431]: E0130 13:54:12.258907 2431 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:12.293730 kubelet[2431]: E0130 13:54:12.293675 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:12.790481 containerd[1963]: time="2025-01-30T13:54:12.790429195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:12.791806 containerd[1963]: time="2025-01-30T13:54:12.791664082Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 30 13:54:12.793122 containerd[1963]: time="2025-01-30T13:54:12.792769066Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:12.795308 containerd[1963]: time="2025-01-30T13:54:12.795271574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:12.796510 containerd[1963]: time="2025-01-30T13:54:12.796392015Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.228519642s" Jan 30 13:54:12.796639 containerd[1963]: 
time="2025-01-30T13:54:12.796615777Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 30 13:54:12.799198 containerd[1963]: time="2025-01-30T13:54:12.799167011Z" level=info msg="CreateContainer within sandbox \"3cf1ac8356d1b4a6f918892f0e7a0c1a35bafe849e0d1855c3e664b756883c75\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 30 13:54:12.814831 containerd[1963]: time="2025-01-30T13:54:12.814788689Z" level=info msg="CreateContainer within sandbox \"3cf1ac8356d1b4a6f918892f0e7a0c1a35bafe849e0d1855c3e664b756883c75\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"703b36f4cd19f39e9d3eaa495f5d84cfd6c4badb34ad16b2fe54a43c812db4a0\"" Jan 30 13:54:12.815579 containerd[1963]: time="2025-01-30T13:54:12.815544877Z" level=info msg="StartContainer for \"703b36f4cd19f39e9d3eaa495f5d84cfd6c4badb34ad16b2fe54a43c812db4a0\"" Jan 30 13:54:12.855205 systemd[1]: Started cri-containerd-703b36f4cd19f39e9d3eaa495f5d84cfd6c4badb34ad16b2fe54a43c812db4a0.scope - libcontainer container 703b36f4cd19f39e9d3eaa495f5d84cfd6c4badb34ad16b2fe54a43c812db4a0. Jan 30 13:54:12.886585 containerd[1963]: time="2025-01-30T13:54:12.886525354Z" level=info msg="StartContainer for \"703b36f4cd19f39e9d3eaa495f5d84cfd6c4badb34ad16b2fe54a43c812db4a0\" returns successfully" Jan 30 13:54:13.294399 kubelet[2431]: E0130 13:54:13.294338 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:13.651693 kubelet[2431]: I0130 13:54:13.651539 2431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.421222214 podStartE2EDuration="7.6515227s" podCreationTimestamp="2025-01-30 13:54:06 +0000 UTC" firstStartedPulling="2025-01-30 13:54:07.567283207 +0000 UTC m=+35.888197396" lastFinishedPulling="2025-01-30 13:54:12.797583706 +0000 UTC m=+41.118497882" observedRunningTime="2025-01-30 13:54:13.651379295 +0000 UTC m=+41.972293491" watchObservedRunningTime="2025-01-30 13:54:13.6515227 +0000 UTC m=+41.972436898" Jan 30 13:54:14.294547 kubelet[2431]: E0130 13:54:14.294492 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:15.295617 kubelet[2431]: E0130 13:54:15.295557 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:16.296741 kubelet[2431]: E0130 13:54:16.296684 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:17.297599 kubelet[2431]: E0130 13:54:17.297542 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:18.298195 kubelet[2431]: E0130 13:54:18.298147 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:19.298603 kubelet[2431]: E0130 13:54:19.298545 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:20.299746 kubelet[2431]: E0130 13:54:20.299694 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:21.300266 kubelet[2431]: E0130 
13:54:21.300214 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:22.300737 kubelet[2431]: E0130 13:54:22.300680 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:22.708487 systemd[1]: Created slice kubepods-besteffort-pod18803238_b65f_43ed_9614_9ac022c59107.slice - libcontainer container kubepods-besteffort-pod18803238_b65f_43ed_9614_9ac022c59107.slice. Jan 30 13:54:22.910359 kubelet[2431]: I0130 13:54:22.910249 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8ffebedb-82e2-4a26-9f6e-08ce5caa7313\" (UniqueName: \"kubernetes.io/nfs/18803238-b65f-43ed-9614-9ac022c59107-pvc-8ffebedb-82e2-4a26-9f6e-08ce5caa7313\") pod \"test-pod-1\" (UID: \"18803238-b65f-43ed-9614-9ac022c59107\") " pod="default/test-pod-1" Jan 30 13:54:22.910359 kubelet[2431]: I0130 13:54:22.910300 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpgrw\" (UniqueName: \"kubernetes.io/projected/18803238-b65f-43ed-9614-9ac022c59107-kube-api-access-gpgrw\") pod \"test-pod-1\" (UID: \"18803238-b65f-43ed-9614-9ac022c59107\") " pod="default/test-pod-1" Jan 30 13:54:23.060119 kernel: FS-Cache: Loaded Jan 30 13:54:23.137455 kernel: RPC: Registered named UNIX socket transport module. Jan 30 13:54:23.137653 kernel: RPC: Registered udp transport module. Jan 30 13:54:23.137689 kernel: RPC: Registered tcp transport module. Jan 30 13:54:23.138334 kernel: RPC: Registered tcp-with-tls transport module. Jan 30 13:54:23.138391 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 30 13:54:23.302612 kubelet[2431]: E0130 13:54:23.301682 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:23.468348 kernel: NFS: Registering the id_resolver key type Jan 30 13:54:23.468469 kernel: Key type id_resolver registered Jan 30 13:54:23.468495 kernel: Key type id_legacy registered Jan 30 13:54:23.511039 nfsidmap[4075]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 30 13:54:23.515433 nfsidmap[4076]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 30 13:54:23.611696 containerd[1963]: time="2025-01-30T13:54:23.611654650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:18803238-b65f-43ed-9614-9ac022c59107,Namespace:default,Attempt:0,}" Jan 30 13:54:23.700145 systemd-networkd[1801]: lxca29eaa94bfba: Link UP Jan 30 13:54:23.707311 (udev-worker)[4072]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:54:23.709404 kernel: eth0: renamed from tmpf913d Jan 30 13:54:23.716053 systemd-networkd[1801]: lxca29eaa94bfba: Gained carrier Jan 30 13:54:24.035009 containerd[1963]: time="2025-01-30T13:54:24.034879430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:24.036232 containerd[1963]: time="2025-01-30T13:54:24.035485669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:24.036232 containerd[1963]: time="2025-01-30T13:54:24.036066308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:24.036232 containerd[1963]: time="2025-01-30T13:54:24.036196271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:24.084201 systemd[1]: run-containerd-runc-k8s.io-f913dd5051eb36dd093b36af2353900eba37d4ec4e787006eaa632eff0f883eb-runc.xxsYT3.mount: Deactivated successfully. Jan 30 13:54:24.096255 systemd[1]: Started cri-containerd-f913dd5051eb36dd093b36af2353900eba37d4ec4e787006eaa632eff0f883eb.scope - libcontainer container f913dd5051eb36dd093b36af2353900eba37d4ec4e787006eaa632eff0f883eb. Jan 30 13:54:24.161816 containerd[1963]: time="2025-01-30T13:54:24.161768729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:18803238-b65f-43ed-9614-9ac022c59107,Namespace:default,Attempt:0,} returns sandbox id \"f913dd5051eb36dd093b36af2353900eba37d4ec4e787006eaa632eff0f883eb\"" Jan 30 13:54:24.163750 containerd[1963]: time="2025-01-30T13:54:24.163711847Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:54:24.302282 kubelet[2431]: E0130 13:54:24.302164 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:24.489178 containerd[1963]: time="2025-01-30T13:54:24.489129091Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:24.490997 containerd[1963]: time="2025-01-30T13:54:24.490910288Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 30 13:54:24.493795 containerd[1963]: time="2025-01-30T13:54:24.493746469Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 329.997897ms" Jan 30 13:54:24.493795 containerd[1963]: time="2025-01-30T13:54:24.493784112Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 13:54:24.496205 containerd[1963]: time="2025-01-30T13:54:24.496163504Z" level=info msg="CreateContainer within sandbox \"f913dd5051eb36dd093b36af2353900eba37d4ec4e787006eaa632eff0f883eb\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 30 13:54:24.521552 containerd[1963]: time="2025-01-30T13:54:24.521497692Z" level=info msg="CreateContainer within sandbox \"f913dd5051eb36dd093b36af2353900eba37d4ec4e787006eaa632eff0f883eb\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"635301140dfc721ee41511ff1884639fb3e2a634a9f35a88c32d035bb164d540\"" Jan 30 13:54:24.522363 containerd[1963]: time="2025-01-30T13:54:24.522320373Z" level=info msg="StartContainer for \"635301140dfc721ee41511ff1884639fb3e2a634a9f35a88c32d035bb164d540\"" Jan 30 13:54:24.557276 systemd[1]: Started cri-containerd-635301140dfc721ee41511ff1884639fb3e2a634a9f35a88c32d035bb164d540.scope - libcontainer container 635301140dfc721ee41511ff1884639fb3e2a634a9f35a88c32d035bb164d540. 
Jan 30 13:54:24.590165 containerd[1963]: time="2025-01-30T13:54:24.590107550Z" level=info msg="StartContainer for \"635301140dfc721ee41511ff1884639fb3e2a634a9f35a88c32d035bb164d540\" returns successfully" Jan 30 13:54:24.729084 kubelet[2431]: I0130 13:54:24.729034 2431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.397759063 podStartE2EDuration="17.729019096s" podCreationTimestamp="2025-01-30 13:54:07 +0000 UTC" firstStartedPulling="2025-01-30 13:54:24.163355921 +0000 UTC m=+52.484270110" lastFinishedPulling="2025-01-30 13:54:24.494615968 +0000 UTC m=+52.815530143" observedRunningTime="2025-01-30 13:54:24.728688072 +0000 UTC m=+53.049602271" watchObservedRunningTime="2025-01-30 13:54:24.729019096 +0000 UTC m=+53.049933273" Jan 30 13:54:24.861577 systemd-networkd[1801]: lxca29eaa94bfba: Gained IPv6LL Jan 30 13:54:25.302676 kubelet[2431]: E0130 13:54:25.302620 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:26.302885 kubelet[2431]: E0130 13:54:26.302825 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:26.868899 ntpd[1940]: Listen normally on 15 lxca29eaa94bfba [fe80::dc2e:39ff:fe2b:a410%13]:123 Jan 30 13:54:26.869360 ntpd[1940]: 30 Jan 13:54:26 ntpd[1940]: Listen normally on 15 lxca29eaa94bfba [fe80::dc2e:39ff:fe2b:a410%13]:123 Jan 30 13:54:27.303391 kubelet[2431]: E0130 13:54:27.303334 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:28.304033 kubelet[2431]: E0130 13:54:28.303963 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:29.304408 kubelet[2431]: E0130 13:54:29.304348 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:30.304569 kubelet[2431]: E0130 13:54:30.304504 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:31.305268 kubelet[2431]: E0130 13:54:31.305223 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:32.198453 containerd[1963]: time="2025-01-30T13:54:32.198387289Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:54:32.258770 kubelet[2431]: E0130 13:54:32.258722 2431 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:32.307036 kubelet[2431]: E0130 13:54:32.306162 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:32.323929 containerd[1963]: time="2025-01-30T13:54:32.323866307Z" level=info msg="StopContainer for \"fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f\" with timeout 2 (s)" Jan 30 13:54:32.324308 containerd[1963]: time="2025-01-30T13:54:32.324274898Z" level=info msg="Stop container \"fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f\" with signal terminated" Jan 30 13:54:32.334411 systemd-networkd[1801]: lxc_health: Link 
DOWN Jan 30 13:54:32.334420 systemd-networkd[1801]: lxc_health: Lost carrier Jan 30 13:54:32.376988 systemd[1]: cri-containerd-fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f.scope: Deactivated successfully. Jan 30 13:54:32.377277 systemd[1]: cri-containerd-fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f.scope: Consumed 7.750s CPU time. Jan 30 13:54:32.390189 kubelet[2431]: E0130 13:54:32.389086 2431 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:54:32.420364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f-rootfs.mount: Deactivated successfully. Jan 30 13:54:32.446861 containerd[1963]: time="2025-01-30T13:54:32.446746309Z" level=info msg="shim disconnected" id=fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f namespace=k8s.io Jan 30 13:54:32.446861 containerd[1963]: time="2025-01-30T13:54:32.446861216Z" level=warning msg="cleaning up after shim disconnected" id=fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f namespace=k8s.io Jan 30 13:54:32.447280 containerd[1963]: time="2025-01-30T13:54:32.446874329Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:54:32.466779 containerd[1963]: time="2025-01-30T13:54:32.466670638Z" level=info msg="StopContainer for \"fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f\" returns successfully" Jan 30 13:54:32.481735 containerd[1963]: time="2025-01-30T13:54:32.481689122Z" level=info msg="StopPodSandbox for \"90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148\"" Jan 30 13:54:32.482854 containerd[1963]: time="2025-01-30T13:54:32.481773027Z" level=info msg="Container to stop \"1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:54:32.482854 containerd[1963]: time="2025-01-30T13:54:32.481791854Z" level=info msg="Container to stop \"8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:54:32.482854 containerd[1963]: time="2025-01-30T13:54:32.481810035Z" level=info msg="Container to stop \"18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:54:32.482854 containerd[1963]: time="2025-01-30T13:54:32.481825060Z" level=info msg="Container to stop \"0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:54:32.482854 containerd[1963]: time="2025-01-30T13:54:32.481869247Z" level=info msg="Container to stop \"fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:54:32.486643 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148-shm.mount: Deactivated successfully. Jan 30 13:54:32.495456 systemd[1]: cri-containerd-90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148.scope: Deactivated successfully. 
Jan 30 13:54:32.535592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148-rootfs.mount: Deactivated successfully. Jan 30 13:54:32.545905 containerd[1963]: time="2025-01-30T13:54:32.545762798Z" level=info msg="shim disconnected" id=90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148 namespace=k8s.io Jan 30 13:54:32.545905 containerd[1963]: time="2025-01-30T13:54:32.545829210Z" level=warning msg="cleaning up after shim disconnected" id=90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148 namespace=k8s.io Jan 30 13:54:32.545905 containerd[1963]: time="2025-01-30T13:54:32.545842367Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:54:32.564871 containerd[1963]: time="2025-01-30T13:54:32.564822896Z" level=info msg="TearDown network for sandbox \"90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148\" successfully" Jan 30 13:54:32.564871 containerd[1963]: time="2025-01-30T13:54:32.564855219Z" level=info msg="StopPodSandbox for \"90376dbe11424bb864397489246987f8935504b37a0243fea22a8f4f2a973148\" returns successfully" Jan 30 13:54:32.675486 kubelet[2431]: I0130 13:54:32.675139 2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c768a57c-db80-4717-b3a5-af2e7334b2bd-hubble-tls\") pod \"c768a57c-db80-4717-b3a5-af2e7334b2bd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " Jan 30 13:54:32.675486 kubelet[2431]: I0130 13:54:32.675194 2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgvv7\" (UniqueName: \"kubernetes.io/projected/c768a57c-db80-4717-b3a5-af2e7334b2bd-kube-api-access-lgvv7\") pod \"c768a57c-db80-4717-b3a5-af2e7334b2bd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " Jan 30 13:54:32.675486 kubelet[2431]: I0130 13:54:32.675224 2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-host-proc-sys-kernel\") pod \"c768a57c-db80-4717-b3a5-af2e7334b2bd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " Jan 30 13:54:32.675486 kubelet[2431]: I0130 13:54:32.675246 2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-bpf-maps\") pod \"c768a57c-db80-4717-b3a5-af2e7334b2bd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " Jan 30 13:54:32.675486 kubelet[2431]: I0130 13:54:32.675269 2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-etc-cni-netd\") pod \"c768a57c-db80-4717-b3a5-af2e7334b2bd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " Jan 30 13:54:32.675486 kubelet[2431]: I0130 13:54:32.675289 2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-cilium-cgroup\") pod \"c768a57c-db80-4717-b3a5-af2e7334b2bd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " Jan 30 13:54:32.675851 kubelet[2431]: I0130 13:54:32.675309 2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-xtables-lock\") pod 
\"c768a57c-db80-4717-b3a5-af2e7334b2bd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " Jan 30 13:54:32.675851 kubelet[2431]: I0130 13:54:32.675331 2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-host-proc-sys-net\") pod \"c768a57c-db80-4717-b3a5-af2e7334b2bd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " Jan 30 13:54:32.675851 kubelet[2431]: I0130 13:54:32.675353 2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-cilium-run\") pod \"c768a57c-db80-4717-b3a5-af2e7334b2bd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " Jan 30 13:54:32.675851 kubelet[2431]: I0130 13:54:32.675375 2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-lib-modules\") pod \"c768a57c-db80-4717-b3a5-af2e7334b2bd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " Jan 30 13:54:32.675851 kubelet[2431]: I0130 13:54:32.675399 2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c768a57c-db80-4717-b3a5-af2e7334b2bd-clustermesh-secrets\") pod \"c768a57c-db80-4717-b3a5-af2e7334b2bd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " Jan 30 13:54:32.675851 kubelet[2431]: I0130 13:54:32.675419 2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-cni-path\") pod \"c768a57c-db80-4717-b3a5-af2e7334b2bd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " Jan 30 13:54:32.676180 kubelet[2431]: I0130 13:54:32.675437 2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-hostproc\") pod \"c768a57c-db80-4717-b3a5-af2e7334b2bd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " Jan 30 13:54:32.676180 kubelet[2431]: I0130 13:54:32.675461 2431 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c768a57c-db80-4717-b3a5-af2e7334b2bd-cilium-config-path\") pod \"c768a57c-db80-4717-b3a5-af2e7334b2bd\" (UID: \"c768a57c-db80-4717-b3a5-af2e7334b2bd\") " Jan 30 13:54:32.682194 kubelet[2431]: I0130 13:54:32.682120 2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c768a57c-db80-4717-b3a5-af2e7334b2bd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c768a57c-db80-4717-b3a5-af2e7334b2bd" (UID: "c768a57c-db80-4717-b3a5-af2e7334b2bd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:54:32.686680 kubelet[2431]: I0130 13:54:32.686274 2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c768a57c-db80-4717-b3a5-af2e7334b2bd" (UID: "c768a57c-db80-4717-b3a5-af2e7334b2bd"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.686680 kubelet[2431]: I0130 13:54:32.686321 2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c768a57c-db80-4717-b3a5-af2e7334b2bd" (UID: "c768a57c-db80-4717-b3a5-af2e7334b2bd"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.686680 kubelet[2431]: I0130 13:54:32.686339 2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c768a57c-db80-4717-b3a5-af2e7334b2bd" (UID: "c768a57c-db80-4717-b3a5-af2e7334b2bd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.686680 kubelet[2431]: I0130 13:54:32.686357 2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c768a57c-db80-4717-b3a5-af2e7334b2bd" (UID: "c768a57c-db80-4717-b3a5-af2e7334b2bd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.686680 kubelet[2431]: I0130 13:54:32.686374 2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c768a57c-db80-4717-b3a5-af2e7334b2bd" (UID: "c768a57c-db80-4717-b3a5-af2e7334b2bd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.687004 kubelet[2431]: I0130 13:54:32.686394 2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c768a57c-db80-4717-b3a5-af2e7334b2bd" (UID: "c768a57c-db80-4717-b3a5-af2e7334b2bd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.687004 kubelet[2431]: I0130 13:54:32.686413 2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c768a57c-db80-4717-b3a5-af2e7334b2bd" (UID: "c768a57c-db80-4717-b3a5-af2e7334b2bd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.687004 kubelet[2431]: I0130 13:54:32.686432 2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c768a57c-db80-4717-b3a5-af2e7334b2bd" (UID: "c768a57c-db80-4717-b3a5-af2e7334b2bd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.690155 kubelet[2431]: I0130 13:54:32.689188 2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c768a57c-db80-4717-b3a5-af2e7334b2bd-kube-api-access-lgvv7" (OuterVolumeSpecName: "kube-api-access-lgvv7") pod "c768a57c-db80-4717-b3a5-af2e7334b2bd" (UID: "c768a57c-db80-4717-b3a5-af2e7334b2bd"). InnerVolumeSpecName "kube-api-access-lgvv7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:54:32.689237 systemd[1]: var-lib-kubelet-pods-c768a57c\x2ddb80\x2d4717\x2db3a5\x2daf2e7334b2bd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlgvv7.mount: Deactivated successfully. Jan 30 13:54:32.692141 kubelet[2431]: I0130 13:54:32.691098 2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-cni-path" (OuterVolumeSpecName: "cni-path") pod "c768a57c-db80-4717-b3a5-af2e7334b2bd" (UID: "c768a57c-db80-4717-b3a5-af2e7334b2bd"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.692141 kubelet[2431]: I0130 13:54:32.691246 2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-hostproc" (OuterVolumeSpecName: "hostproc") pod "c768a57c-db80-4717-b3a5-af2e7334b2bd" (UID: "c768a57c-db80-4717-b3a5-af2e7334b2bd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:54:32.696400 kubelet[2431]: I0130 13:54:32.696361 2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c768a57c-db80-4717-b3a5-af2e7334b2bd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c768a57c-db80-4717-b3a5-af2e7334b2bd" (UID: "c768a57c-db80-4717-b3a5-af2e7334b2bd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:54:32.702745 kubelet[2431]: I0130 13:54:32.702706 2431 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c768a57c-db80-4717-b3a5-af2e7334b2bd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c768a57c-db80-4717-b3a5-af2e7334b2bd" (UID: "c768a57c-db80-4717-b3a5-af2e7334b2bd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:54:32.725542 systemd[1]: Removed slice kubepods-burstable-podc768a57c_db80_4717_b3a5_af2e7334b2bd.slice - libcontainer container kubepods-burstable-podc768a57c_db80_4717_b3a5_af2e7334b2bd.slice. Jan 30 13:54:32.728039 systemd[1]: kubepods-burstable-podc768a57c_db80_4717_b3a5_af2e7334b2bd.slice: Consumed 7.846s CPU time. 
Jan 30 13:54:32.733567 kubelet[2431]: I0130 13:54:32.733525 2431 scope.go:117] "RemoveContainer" containerID="fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f" Jan 30 13:54:32.736581 containerd[1963]: time="2025-01-30T13:54:32.736547171Z" level=info msg="RemoveContainer for \"fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f\"" Jan 30 13:54:32.745729 containerd[1963]: time="2025-01-30T13:54:32.745680042Z" level=info msg="RemoveContainer for \"fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f\" returns successfully" Jan 30 13:54:32.746346 kubelet[2431]: I0130 13:54:32.745962 2431 scope.go:117] "RemoveContainer" containerID="8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58" Jan 30 13:54:32.748010 containerd[1963]: time="2025-01-30T13:54:32.747501814Z" level=info msg="RemoveContainer for \"8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58\"" Jan 30 13:54:32.752790 containerd[1963]: time="2025-01-30T13:54:32.752678150Z" level=info msg="RemoveContainer for \"8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58\" returns successfully" Jan 30 13:54:32.753434 kubelet[2431]: I0130 13:54:32.753043 2431 scope.go:117] "RemoveContainer" containerID="1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7" Jan 30 13:54:32.754796 containerd[1963]: time="2025-01-30T13:54:32.754759226Z" level=info msg="RemoveContainer for \"1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7\"" Jan 30 13:54:32.760694 containerd[1963]: time="2025-01-30T13:54:32.760657291Z" level=info msg="RemoveContainer for \"1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7\" returns successfully" Jan 30 13:54:32.760862 kubelet[2431]: I0130 13:54:32.760838 2431 scope.go:117] "RemoveContainer" containerID="0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542" Jan 30 13:54:32.762405 containerd[1963]: time="2025-01-30T13:54:32.762174141Z" level=info msg="RemoveContainer for \"0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542\"" Jan 30 13:54:32.768656 containerd[1963]: time="2025-01-30T13:54:32.768610581Z" level=info msg="RemoveContainer for \"0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542\" returns successfully" Jan 30 13:54:32.768924 kubelet[2431]: I0130 13:54:32.768900 2431 scope.go:117] "RemoveContainer" containerID="18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319" Jan 30 13:54:32.770557 containerd[1963]: time="2025-01-30T13:54:32.770293554Z" level=info msg="RemoveContainer for \"18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319\"" Jan 30 13:54:32.775948 kubelet[2431]: I0130 13:54:32.775906 2431 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-cilium-run\") on node \"172.31.27.35\" DevicePath \"\"" Jan 30 13:54:32.775948 kubelet[2431]: I0130 13:54:32.775936 2431 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-lib-modules\") on node \"172.31.27.35\" DevicePath \"\"" Jan 30 13:54:32.775948 kubelet[2431]: I0130 13:54:32.775949 2431 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c768a57c-db80-4717-b3a5-af2e7334b2bd-clustermesh-secrets\") on node \"172.31.27.35\" DevicePath \"\"" Jan 30 13:54:32.776163 kubelet[2431]: I0130 13:54:32.775988 2431 reconciler_common.go:288] "Volume detached 
for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-cni-path\") on node \"172.31.27.35\" DevicePath \"\"" Jan 30 13:54:32.776163 kubelet[2431]: I0130 13:54:32.775999 2431 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-hostproc\") on node \"172.31.27.35\" DevicePath \"\"" Jan 30 13:54:32.776163 kubelet[2431]: I0130 13:54:32.776009 2431 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c768a57c-db80-4717-b3a5-af2e7334b2bd-cilium-config-path\") on node \"172.31.27.35\" DevicePath \"\"" Jan 30 13:54:32.776163 kubelet[2431]: I0130 13:54:32.776020 2431 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-etc-cni-netd\") on node \"172.31.27.35\" DevicePath \"\"" Jan 30 13:54:32.776163 kubelet[2431]: I0130 13:54:32.776030 2431 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c768a57c-db80-4717-b3a5-af2e7334b2bd-hubble-tls\") on node \"172.31.27.35\" DevicePath \"\"" Jan 30 13:54:32.776163 kubelet[2431]: I0130 13:54:32.776041 2431 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lgvv7\" (UniqueName: \"kubernetes.io/projected/c768a57c-db80-4717-b3a5-af2e7334b2bd-kube-api-access-lgvv7\") on node \"172.31.27.35\" DevicePath \"\"" Jan 30 13:54:32.776163 kubelet[2431]: I0130 13:54:32.776053 2431 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-host-proc-sys-kernel\") on node \"172.31.27.35\" DevicePath \"\"" Jan 30 13:54:32.776163 kubelet[2431]: I0130 13:54:32.776067 2431 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-bpf-maps\") on node \"172.31.27.35\" DevicePath \"\"" Jan 30 13:54:32.776517 kubelet[2431]: I0130 13:54:32.776077 2431 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-cilium-cgroup\") on node \"172.31.27.35\" DevicePath \"\"" Jan 30 13:54:32.776517 kubelet[2431]: I0130 13:54:32.776089 2431 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-xtables-lock\") on node \"172.31.27.35\" DevicePath \"\"" Jan 30 13:54:32.776517 kubelet[2431]: I0130 13:54:32.776102 2431 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c768a57c-db80-4717-b3a5-af2e7334b2bd-host-proc-sys-net\") on node \"172.31.27.35\" DevicePath \"\"" Jan 30 13:54:32.779792 containerd[1963]: time="2025-01-30T13:54:32.779749926Z" level=info msg="RemoveContainer for \"18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319\" returns successfully" Jan 30 13:54:32.780132 kubelet[2431]: I0130 13:54:32.780107 2431 scope.go:117] "RemoveContainer" containerID="fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f" Jan 30 13:54:32.780431 containerd[1963]: time="2025-01-30T13:54:32.780386669Z" level=error msg="ContainerStatus for \"fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f\": not found" Jan 30 13:54:32.794963 kubelet[2431]: E0130 13:54:32.794885 2431 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f\": not found" containerID="fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f" Jan 30 13:54:32.795177 kubelet[2431]: I0130 13:54:32.794991 2431 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f"} err="failed to get container status \"fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb9aef1af4e74a3741b0a0ad050e87ecb9037f04b85e089f4c1f2279d8f7f48f\": not found" Jan 30 13:54:32.795177 kubelet[2431]: I0130 13:54:32.795152 2431 scope.go:117] "RemoveContainer" containerID="8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58" Jan 30 13:54:32.795570 containerd[1963]: time="2025-01-30T13:54:32.795521117Z" level=error msg="ContainerStatus for \"8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58\": not found" Jan 30 13:54:32.795745 kubelet[2431]: E0130 13:54:32.795718 2431 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58\": not found" containerID="8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58" Jan 30 13:54:32.795837 kubelet[2431]: I0130 13:54:32.795751 2431 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58"} err="failed to get container status \"8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b51b920c79a264fa27601d8bba193f7bd9d6296f1eabfe54eef9e94c4aacc58\": not found" Jan 30 13:54:32.795837 kubelet[2431]: I0130 13:54:32.795774 2431 scope.go:117] "RemoveContainer" containerID="1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7" Jan 30 13:54:32.796044 containerd[1963]: time="2025-01-30T13:54:32.796007664Z" level=error msg="ContainerStatus for \"1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7\": not found" Jan 30 13:54:32.796223 kubelet[2431]: E0130 13:54:32.796168 2431 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7\": not found" containerID="1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7" Jan 30 13:54:32.796223 kubelet[2431]: I0130 13:54:32.796204 2431 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7"} err="failed to get container status 
\"1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"1fe2c9293acce427770100ec60394d923d35abb31dd021bcfbb2a554a2b1a7f7\": not found" Jan 30 13:54:32.796361 kubelet[2431]: I0130 13:54:32.796224 2431 scope.go:117] "RemoveContainer" containerID="0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542" Jan 30 13:54:32.796534 containerd[1963]: time="2025-01-30T13:54:32.796502166Z" level=error msg="ContainerStatus for \"0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542\": not found" Jan 30 13:54:32.796637 kubelet[2431]: E0130 13:54:32.796609 2431 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542\": not found" containerID="0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542" Jan 30 13:54:32.796704 kubelet[2431]: I0130 13:54:32.796642 2431 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542"} err="failed to get container status \"0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542\": rpc error: code = NotFound desc = an error occurred when try to find container \"0210b39c53140e90758d5b291eb0f7fdec7037fe5d36c3c8e017222b8bc28542\": not found" Jan 30 13:54:32.796704 kubelet[2431]: I0130 13:54:32.796661 2431 scope.go:117] "RemoveContainer" containerID="18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319" Jan 30 13:54:32.796881 containerd[1963]: time="2025-01-30T13:54:32.796847187Z" level=error msg="ContainerStatus for \"18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319\": not found" Jan 30 13:54:32.797103 kubelet[2431]: E0130 13:54:32.797044 2431 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319\": not found" containerID="18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319" Jan 30 13:54:32.797103 kubelet[2431]: I0130 13:54:32.797082 2431 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319"} err="failed to get container status \"18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319\": rpc error: code = NotFound desc = an error occurred when try to find container \"18bc5a0f85c1c187cc311c3c42eb5e247daa7ab720d9d231b2846fd20914f319\": not found" Jan 30 13:54:32.937344 systemd[1]: var-lib-kubelet-pods-c768a57c\x2ddb80\x2d4717\x2db3a5\x2daf2e7334b2bd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 13:54:32.937475 systemd[1]: var-lib-kubelet-pods-c768a57c\x2ddb80\x2d4717\x2db3a5\x2daf2e7334b2bd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 30 13:54:33.307082 kubelet[2431]: E0130 13:54:33.307022 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:34.192922 kubelet[2431]: I0130 13:54:34.192876 2431 setters.go:600] "Node became not ready" node="172.31.27.35" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T13:54:34Z","lastTransitionTime":"2025-01-30T13:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 13:54:34.307400 kubelet[2431]: E0130 13:54:34.307342 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:34.398915 kubelet[2431]: I0130 13:54:34.398872 2431 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c768a57c-db80-4717-b3a5-af2e7334b2bd" path="/var/lib/kubelet/pods/c768a57c-db80-4717-b3a5-af2e7334b2bd/volumes" Jan 30 13:54:34.868859 ntpd[1940]: Deleting interface #12 lxc_health, fe80::1c4d:baff:fe0a:143d%7#123, interface stats: received=0, sent=0, dropped=0, active_time=37 secs Jan 30 13:54:34.869275 ntpd[1940]: 30 Jan 13:54:34 ntpd[1940]: Deleting interface #12 lxc_health, fe80::1c4d:baff:fe0a:143d%7#123, interface stats: received=0, sent=0, dropped=0, active_time=37 secs Jan 30 13:54:35.207497 kubelet[2431]: E0130 13:54:35.207454 2431 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c768a57c-db80-4717-b3a5-af2e7334b2bd" containerName="clean-cilium-state" Jan 30 13:54:35.207497 kubelet[2431]: E0130 13:54:35.207484 2431 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c768a57c-db80-4717-b3a5-af2e7334b2bd" containerName="cilium-agent" Jan 30 13:54:35.207497 kubelet[2431]: E0130 13:54:35.207494 2431 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c768a57c-db80-4717-b3a5-af2e7334b2bd" containerName="mount-cgroup" Jan 30 13:54:35.207497 kubelet[2431]: E0130 13:54:35.207505 2431 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c768a57c-db80-4717-b3a5-af2e7334b2bd" containerName="apply-sysctl-overwrites" Jan 30 13:54:35.207768 kubelet[2431]: E0130 13:54:35.207514 2431 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c768a57c-db80-4717-b3a5-af2e7334b2bd" containerName="mount-bpf-fs" Jan 30 13:54:35.207768 kubelet[2431]: I0130 13:54:35.207536 2431 memory_manager.go:354] "RemoveStaleState removing state" podUID="c768a57c-db80-4717-b3a5-af2e7334b2bd" containerName="cilium-agent" Jan 30 13:54:35.217325 systemd[1]: Created slice kubepods-besteffort-podef3e8607_9eb9_4709_ad75_8b9d3fb85897.slice - libcontainer container kubepods-besteffort-podef3e8607_9eb9_4709_ad75_8b9d3fb85897.slice. Jan 30 13:54:35.308199 kubelet[2431]: E0130 13:54:35.308142 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:35.335508 systemd[1]: Created slice kubepods-burstable-pod75c19ada_0347_4c08_8244_4ab98fc37298.slice - libcontainer container kubepods-burstable-pod75c19ada_0347_4c08_8244_4ab98fc37298.slice. 
Jan 30 13:54:35.390591 kubelet[2431]: I0130 13:54:35.390536 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvpnv\" (UniqueName: \"kubernetes.io/projected/ef3e8607-9eb9-4709-ad75-8b9d3fb85897-kube-api-access-gvpnv\") pod \"cilium-operator-5d85765b45-s6g9r\" (UID: \"ef3e8607-9eb9-4709-ad75-8b9d3fb85897\") " pod="kube-system/cilium-operator-5d85765b45-s6g9r" Jan 30 13:54:35.390591 kubelet[2431]: I0130 13:54:35.390597 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef3e8607-9eb9-4709-ad75-8b9d3fb85897-cilium-config-path\") pod \"cilium-operator-5d85765b45-s6g9r\" (UID: \"ef3e8607-9eb9-4709-ad75-8b9d3fb85897\") " pod="kube-system/cilium-operator-5d85765b45-s6g9r" Jan 30 13:54:35.491990 kubelet[2431]: I0130 13:54:35.491462 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75c19ada-0347-4c08-8244-4ab98fc37298-cilium-config-path\") pod \"cilium-mw4m4\" (UID: \"75c19ada-0347-4c08-8244-4ab98fc37298\") " pod="kube-system/cilium-mw4m4" Jan 30 13:54:35.491990 kubelet[2431]: I0130 13:54:35.491558 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75c19ada-0347-4c08-8244-4ab98fc37298-host-proc-sys-kernel\") pod \"cilium-mw4m4\" (UID: \"75c19ada-0347-4c08-8244-4ab98fc37298\") " pod="kube-system/cilium-mw4m4" Jan 30 13:54:35.491990 kubelet[2431]: I0130 13:54:35.491583 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75c19ada-0347-4c08-8244-4ab98fc37298-hubble-tls\") pod \"cilium-mw4m4\" (UID: \"75c19ada-0347-4c08-8244-4ab98fc37298\") " pod="kube-system/cilium-mw4m4" Jan 30 13:54:35.491990 kubelet[2431]: I0130 13:54:35.491606 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75c19ada-0347-4c08-8244-4ab98fc37298-hostproc\") pod \"cilium-mw4m4\" (UID: \"75c19ada-0347-4c08-8244-4ab98fc37298\") " pod="kube-system/cilium-mw4m4" Jan 30 13:54:35.491990 kubelet[2431]: I0130 13:54:35.491627 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75c19ada-0347-4c08-8244-4ab98fc37298-cni-path\") pod \"cilium-mw4m4\" (UID: \"75c19ada-0347-4c08-8244-4ab98fc37298\") " pod="kube-system/cilium-mw4m4" Jan 30 13:54:35.491990 kubelet[2431]: I0130 13:54:35.491649 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75c19ada-0347-4c08-8244-4ab98fc37298-lib-modules\") pod \"cilium-mw4m4\" (UID: \"75c19ada-0347-4c08-8244-4ab98fc37298\") " pod="kube-system/cilium-mw4m4" Jan 30 13:54:35.492405 kubelet[2431]: I0130 13:54:35.491670 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75c19ada-0347-4c08-8244-4ab98fc37298-cilium-run\") pod \"cilium-mw4m4\" (UID: \"75c19ada-0347-4c08-8244-4ab98fc37298\") " pod="kube-system/cilium-mw4m4" Jan 30 13:54:35.492405 kubelet[2431]: I0130 13:54:35.491692 2431 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75c19ada-0347-4c08-8244-4ab98fc37298-bpf-maps\") pod \"cilium-mw4m4\" (UID: \"75c19ada-0347-4c08-8244-4ab98fc37298\") " pod="kube-system/cilium-mw4m4" Jan 30 13:54:35.492405 kubelet[2431]: I0130 13:54:35.491764 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75c19ada-0347-4c08-8244-4ab98fc37298-clustermesh-secrets\") pod \"cilium-mw4m4\" (UID: \"75c19ada-0347-4c08-8244-4ab98fc37298\") " pod="kube-system/cilium-mw4m4" Jan 30 13:54:35.492405 kubelet[2431]: I0130 13:54:35.491808 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75c19ada-0347-4c08-8244-4ab98fc37298-host-proc-sys-net\") pod \"cilium-mw4m4\" (UID: \"75c19ada-0347-4c08-8244-4ab98fc37298\") " pod="kube-system/cilium-mw4m4" Jan 30 13:54:35.492405 kubelet[2431]: I0130 13:54:35.491876 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75c19ada-0347-4c08-8244-4ab98fc37298-cilium-cgroup\") pod \"cilium-mw4m4\" (UID: \"75c19ada-0347-4c08-8244-4ab98fc37298\") " pod="kube-system/cilium-mw4m4" Jan 30 13:54:35.492405 kubelet[2431]: I0130 13:54:35.491904 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75c19ada-0347-4c08-8244-4ab98fc37298-etc-cni-netd\") pod \"cilium-mw4m4\" (UID: \"75c19ada-0347-4c08-8244-4ab98fc37298\") " pod="kube-system/cilium-mw4m4" Jan 30 13:54:35.492654 kubelet[2431]: I0130 13:54:35.491925 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/75c19ada-0347-4c08-8244-4ab98fc37298-cilium-ipsec-secrets\") pod \"cilium-mw4m4\" (UID: \"75c19ada-0347-4c08-8244-4ab98fc37298\") " pod="kube-system/cilium-mw4m4" Jan 30 13:54:35.492654 kubelet[2431]: I0130 13:54:35.491998 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75c19ada-0347-4c08-8244-4ab98fc37298-xtables-lock\") pod \"cilium-mw4m4\" (UID: \"75c19ada-0347-4c08-8244-4ab98fc37298\") " pod="kube-system/cilium-mw4m4" Jan 30 13:54:35.492654 kubelet[2431]: I0130 13:54:35.492019 2431 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svqb9\" (UniqueName: \"kubernetes.io/projected/75c19ada-0347-4c08-8244-4ab98fc37298-kube-api-access-svqb9\") pod \"cilium-mw4m4\" (UID: \"75c19ada-0347-4c08-8244-4ab98fc37298\") " pod="kube-system/cilium-mw4m4" Jan 30 13:54:35.531077 containerd[1963]: time="2025-01-30T13:54:35.526296635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-s6g9r,Uid:ef3e8607-9eb9-4709-ad75-8b9d3fb85897,Namespace:kube-system,Attempt:0,}" Jan 30 13:54:35.573070 containerd[1963]: time="2025-01-30T13:54:35.572906357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:35.573070 containerd[1963]: time="2025-01-30T13:54:35.573056921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:35.573362 containerd[1963]: time="2025-01-30T13:54:35.573117873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:35.573627 containerd[1963]: time="2025-01-30T13:54:35.573576107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:35.619840 systemd[1]: Started cri-containerd-ec9697235d4fb999166ee53b657c79ab0a9bff3f960d1840827a13e1862ea1d4.scope - libcontainer container ec9697235d4fb999166ee53b657c79ab0a9bff3f960d1840827a13e1862ea1d4. Jan 30 13:54:35.650966 containerd[1963]: time="2025-01-30T13:54:35.650837598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mw4m4,Uid:75c19ada-0347-4c08-8244-4ab98fc37298,Namespace:kube-system,Attempt:0,}" Jan 30 13:54:35.690411 containerd[1963]: time="2025-01-30T13:54:35.690224466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:54:35.690411 containerd[1963]: time="2025-01-30T13:54:35.690331214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:54:35.690411 containerd[1963]: time="2025-01-30T13:54:35.690387713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:35.693539 containerd[1963]: time="2025-01-30T13:54:35.693345160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:54:35.724311 systemd[1]: Started cri-containerd-165c4770326f4ac1d09a96c83cf2c82754c604e7ead576f0f9573949210a03d0.scope - libcontainer container 165c4770326f4ac1d09a96c83cf2c82754c604e7ead576f0f9573949210a03d0. 
Jan 30 13:54:35.726013 containerd[1963]: time="2025-01-30T13:54:35.725965661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-s6g9r,Uid:ef3e8607-9eb9-4709-ad75-8b9d3fb85897,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec9697235d4fb999166ee53b657c79ab0a9bff3f960d1840827a13e1862ea1d4\"" Jan 30 13:54:35.730458 containerd[1963]: time="2025-01-30T13:54:35.730415717Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:54:35.777646 containerd[1963]: time="2025-01-30T13:54:35.777268824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mw4m4,Uid:75c19ada-0347-4c08-8244-4ab98fc37298,Namespace:kube-system,Attempt:0,} returns sandbox id \"165c4770326f4ac1d09a96c83cf2c82754c604e7ead576f0f9573949210a03d0\"" Jan 30 13:54:35.780874 containerd[1963]: time="2025-01-30T13:54:35.780711811Z" level=info msg="CreateContainer within sandbox \"165c4770326f4ac1d09a96c83cf2c82754c604e7ead576f0f9573949210a03d0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:54:35.804058 containerd[1963]: time="2025-01-30T13:54:35.804017314Z" level=info msg="CreateContainer within sandbox \"165c4770326f4ac1d09a96c83cf2c82754c604e7ead576f0f9573949210a03d0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"19d1508579d4da881a70e17cd597ff1ed0091cefbfc7fb036eb70dbc5392e407\"" Jan 30 13:54:35.804859 containerd[1963]: time="2025-01-30T13:54:35.804818718Z" level=info msg="StartContainer for \"19d1508579d4da881a70e17cd597ff1ed0091cefbfc7fb036eb70dbc5392e407\"" Jan 30 13:54:35.840707 systemd[1]: Started cri-containerd-19d1508579d4da881a70e17cd597ff1ed0091cefbfc7fb036eb70dbc5392e407.scope - libcontainer container 19d1508579d4da881a70e17cd597ff1ed0091cefbfc7fb036eb70dbc5392e407. Jan 30 13:54:35.873933 containerd[1963]: time="2025-01-30T13:54:35.873858854Z" level=info msg="StartContainer for \"19d1508579d4da881a70e17cd597ff1ed0091cefbfc7fb036eb70dbc5392e407\" returns successfully" Jan 30 13:54:35.973929 systemd[1]: cri-containerd-19d1508579d4da881a70e17cd597ff1ed0091cefbfc7fb036eb70dbc5392e407.scope: Deactivated successfully. Jan 30 13:54:36.027391 containerd[1963]: time="2025-01-30T13:54:36.027327419Z" level=info msg="shim disconnected" id=19d1508579d4da881a70e17cd597ff1ed0091cefbfc7fb036eb70dbc5392e407 namespace=k8s.io Jan 30 13:54:36.027391 containerd[1963]: time="2025-01-30T13:54:36.027385386Z" level=warning msg="cleaning up after shim disconnected" id=19d1508579d4da881a70e17cd597ff1ed0091cefbfc7fb036eb70dbc5392e407 namespace=k8s.io Jan 30 13:54:36.027391 containerd[1963]: time="2025-01-30T13:54:36.027396416Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:54:36.308987 kubelet[2431]: E0130 13:54:36.308915 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:36.736820 containerd[1963]: time="2025-01-30T13:54:36.736778160Z" level=info msg="CreateContainer within sandbox \"165c4770326f4ac1d09a96c83cf2c82754c604e7ead576f0f9573949210a03d0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:54:36.768525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3692190121.mount: Deactivated successfully. 
Jan 30 13:54:36.770342 containerd[1963]: time="2025-01-30T13:54:36.770082955Z" level=info msg="CreateContainer within sandbox \"165c4770326f4ac1d09a96c83cf2c82754c604e7ead576f0f9573949210a03d0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b4a37074e1be4d6353d0999830b113f6f0f5f0cadd3319890439dc8b7e11c509\"" Jan 30 13:54:36.772010 containerd[1963]: time="2025-01-30T13:54:36.771826724Z" level=info msg="StartContainer for \"b4a37074e1be4d6353d0999830b113f6f0f5f0cadd3319890439dc8b7e11c509\"" Jan 30 13:54:36.836279 systemd[1]: Started cri-containerd-b4a37074e1be4d6353d0999830b113f6f0f5f0cadd3319890439dc8b7e11c509.scope - libcontainer container b4a37074e1be4d6353d0999830b113f6f0f5f0cadd3319890439dc8b7e11c509. Jan 30 13:54:36.922015 containerd[1963]: time="2025-01-30T13:54:36.921628093Z" level=info msg="StartContainer for \"b4a37074e1be4d6353d0999830b113f6f0f5f0cadd3319890439dc8b7e11c509\" returns successfully" Jan 30 13:54:36.939189 systemd[1]: cri-containerd-b4a37074e1be4d6353d0999830b113f6f0f5f0cadd3319890439dc8b7e11c509.scope: Deactivated successfully. Jan 30 13:54:36.979265 containerd[1963]: time="2025-01-30T13:54:36.978637741Z" level=info msg="shim disconnected" id=b4a37074e1be4d6353d0999830b113f6f0f5f0cadd3319890439dc8b7e11c509 namespace=k8s.io Jan 30 13:54:36.979265 containerd[1963]: time="2025-01-30T13:54:36.978696358Z" level=warning msg="cleaning up after shim disconnected" id=b4a37074e1be4d6353d0999830b113f6f0f5f0cadd3319890439dc8b7e11c509 namespace=k8s.io Jan 30 13:54:36.979265 containerd[1963]: time="2025-01-30T13:54:36.978708684Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:54:37.310085 kubelet[2431]: E0130 13:54:37.309923 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:37.392112 kubelet[2431]: E0130 13:54:37.391682 2431 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:54:37.515876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4a37074e1be4d6353d0999830b113f6f0f5f0cadd3319890439dc8b7e11c509-rootfs.mount: Deactivated successfully. 
Jan 30 13:54:37.641682 containerd[1963]: time="2025-01-30T13:54:37.641543767Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:37.643672 containerd[1963]: time="2025-01-30T13:54:37.643512852Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 13:54:37.645917 containerd[1963]: time="2025-01-30T13:54:37.645535848Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:54:37.647369 containerd[1963]: time="2025-01-30T13:54:37.647330592Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.916700352s" Jan 30 13:54:37.647524 containerd[1963]: time="2025-01-30T13:54:37.647501274Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 13:54:37.649767 containerd[1963]: time="2025-01-30T13:54:37.649734907Z" level=info msg="CreateContainer within sandbox \"ec9697235d4fb999166ee53b657c79ab0a9bff3f960d1840827a13e1862ea1d4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:54:37.676495 containerd[1963]: time="2025-01-30T13:54:37.676444063Z" level=info msg="CreateContainer within sandbox \"ec9697235d4fb999166ee53b657c79ab0a9bff3f960d1840827a13e1862ea1d4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7faa63b8eb518f712e5a7e27d7096551040545981c81f2a5211789f8e766ef6e\"" Jan 30 13:54:37.678010 containerd[1963]: time="2025-01-30T13:54:37.677250345Z" level=info msg="StartContainer for \"7faa63b8eb518f712e5a7e27d7096551040545981c81f2a5211789f8e766ef6e\"" Jan 30 13:54:37.714207 systemd[1]: Started cri-containerd-7faa63b8eb518f712e5a7e27d7096551040545981c81f2a5211789f8e766ef6e.scope - libcontainer container 7faa63b8eb518f712e5a7e27d7096551040545981c81f2a5211789f8e766ef6e. 
Jan 30 13:54:37.745016 containerd[1963]: time="2025-01-30T13:54:37.744717727Z" level=info msg="CreateContainer within sandbox \"165c4770326f4ac1d09a96c83cf2c82754c604e7ead576f0f9573949210a03d0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:54:37.759151 containerd[1963]: time="2025-01-30T13:54:37.759103756Z" level=info msg="StartContainer for \"7faa63b8eb518f712e5a7e27d7096551040545981c81f2a5211789f8e766ef6e\" returns successfully" Jan 30 13:54:37.853845 containerd[1963]: time="2025-01-30T13:54:37.853792711Z" level=info msg="CreateContainer within sandbox \"165c4770326f4ac1d09a96c83cf2c82754c604e7ead576f0f9573949210a03d0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"72c06e36218bc23644bdd3cd0532ab9a618027f8ef106ab359e07bfa782643f6\"" Jan 30 13:54:37.855379 containerd[1963]: time="2025-01-30T13:54:37.854778672Z" level=info msg="StartContainer for \"72c06e36218bc23644bdd3cd0532ab9a618027f8ef106ab359e07bfa782643f6\"" Jan 30 13:54:37.894215 systemd[1]: Started cri-containerd-72c06e36218bc23644bdd3cd0532ab9a618027f8ef106ab359e07bfa782643f6.scope - libcontainer container 72c06e36218bc23644bdd3cd0532ab9a618027f8ef106ab359e07bfa782643f6. Jan 30 13:54:37.980726 containerd[1963]: time="2025-01-30T13:54:37.979224750Z" level=info msg="StartContainer for \"72c06e36218bc23644bdd3cd0532ab9a618027f8ef106ab359e07bfa782643f6\" returns successfully" Jan 30 13:54:38.092022 systemd[1]: cri-containerd-72c06e36218bc23644bdd3cd0532ab9a618027f8ef106ab359e07bfa782643f6.scope: Deactivated successfully. Jan 30 13:54:38.131893 containerd[1963]: time="2025-01-30T13:54:38.131811855Z" level=info msg="shim disconnected" id=72c06e36218bc23644bdd3cd0532ab9a618027f8ef106ab359e07bfa782643f6 namespace=k8s.io Jan 30 13:54:38.131893 containerd[1963]: time="2025-01-30T13:54:38.131894017Z" level=warning msg="cleaning up after shim disconnected" id=72c06e36218bc23644bdd3cd0532ab9a618027f8ef106ab359e07bfa782643f6 namespace=k8s.io Jan 30 13:54:38.132395 containerd[1963]: time="2025-01-30T13:54:38.131907553Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:54:38.310750 kubelet[2431]: E0130 13:54:38.310670 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:38.758068 kubelet[2431]: I0130 13:54:38.758008 2431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-s6g9r" podStartSLOduration=1.839290466 podStartE2EDuration="3.757989604s" podCreationTimestamp="2025-01-30 13:54:35 +0000 UTC" firstStartedPulling="2025-01-30 13:54:35.729639222 +0000 UTC m=+64.050553405" lastFinishedPulling="2025-01-30 13:54:37.648338367 +0000 UTC m=+65.969252543" observedRunningTime="2025-01-30 13:54:38.757648422 +0000 UTC m=+67.078562618" watchObservedRunningTime="2025-01-30 13:54:38.757989604 +0000 UTC m=+67.078903794" Jan 30 13:54:38.765695 containerd[1963]: time="2025-01-30T13:54:38.765639537Z" level=info msg="CreateContainer within sandbox \"165c4770326f4ac1d09a96c83cf2c82754c604e7ead576f0f9573949210a03d0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:54:38.806158 containerd[1963]: time="2025-01-30T13:54:38.806109858Z" level=info msg="CreateContainer within sandbox \"165c4770326f4ac1d09a96c83cf2c82754c604e7ead576f0f9573949210a03d0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"85bbdb45d9c476b6b8c5ecdd2f86ca6147cc2f11bb44d6517ee520d82f99874a\"" Jan 30 13:54:38.807000 
containerd[1963]: time="2025-01-30T13:54:38.806936222Z" level=info msg="StartContainer for \"85bbdb45d9c476b6b8c5ecdd2f86ca6147cc2f11bb44d6517ee520d82f99874a\"" Jan 30 13:54:38.866240 systemd[1]: Started cri-containerd-85bbdb45d9c476b6b8c5ecdd2f86ca6147cc2f11bb44d6517ee520d82f99874a.scope - libcontainer container 85bbdb45d9c476b6b8c5ecdd2f86ca6147cc2f11bb44d6517ee520d82f99874a. Jan 30 13:54:38.903802 systemd[1]: cri-containerd-85bbdb45d9c476b6b8c5ecdd2f86ca6147cc2f11bb44d6517ee520d82f99874a.scope: Deactivated successfully. Jan 30 13:54:38.909730 containerd[1963]: time="2025-01-30T13:54:38.909519606Z" level=info msg="StartContainer for \"85bbdb45d9c476b6b8c5ecdd2f86ca6147cc2f11bb44d6517ee520d82f99874a\" returns successfully" Jan 30 13:54:38.960283 containerd[1963]: time="2025-01-30T13:54:38.960201940Z" level=info msg="shim disconnected" id=85bbdb45d9c476b6b8c5ecdd2f86ca6147cc2f11bb44d6517ee520d82f99874a namespace=k8s.io Jan 30 13:54:38.960283 containerd[1963]: time="2025-01-30T13:54:38.960277480Z" level=warning msg="cleaning up after shim disconnected" id=85bbdb45d9c476b6b8c5ecdd2f86ca6147cc2f11bb44d6517ee520d82f99874a namespace=k8s.io Jan 30 13:54:38.960283 containerd[1963]: time="2025-01-30T13:54:38.960290202Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:54:39.311528 kubelet[2431]: E0130 13:54:39.311465 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:39.515420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85bbdb45d9c476b6b8c5ecdd2f86ca6147cc2f11bb44d6517ee520d82f99874a-rootfs.mount: Deactivated successfully. Jan 30 13:54:39.771358 containerd[1963]: time="2025-01-30T13:54:39.771314774Z" level=info msg="CreateContainer within sandbox \"165c4770326f4ac1d09a96c83cf2c82754c604e7ead576f0f9573949210a03d0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:54:39.866071 containerd[1963]: time="2025-01-30T13:54:39.865757114Z" level=info msg="CreateContainer within sandbox \"165c4770326f4ac1d09a96c83cf2c82754c604e7ead576f0f9573949210a03d0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b863ce72dd7a445ce0e047b5e0111c83cf5a93be02caab680c21b0c477a267dd\"" Jan 30 13:54:39.871881 containerd[1963]: time="2025-01-30T13:54:39.871161521Z" level=info msg="StartContainer for \"b863ce72dd7a445ce0e047b5e0111c83cf5a93be02caab680c21b0c477a267dd\"" Jan 30 13:54:39.917541 systemd[1]: Started cri-containerd-b863ce72dd7a445ce0e047b5e0111c83cf5a93be02caab680c21b0c477a267dd.scope - libcontainer container b863ce72dd7a445ce0e047b5e0111c83cf5a93be02caab680c21b0c477a267dd. 
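By this point the sandbox 165c4770... has run apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state in sequence, each created, started, exited and reaped, and the long-lived cilium-agent container (b863ce72...) has just been started. The sketch below is a stripped-down equivalent of one of those Create/Start pairs against the CRI runtime service; the sandbox ID and pod name are copied from the log, while the image, command and UID are placeholders, and the real kubelet calls carry mounts, environment and security context omitted here.

package main

import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
                grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
                log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        // Sandbox ID copied from the "CreateContainer within sandbox" entries above.
        const sandboxID = "165c4770326f4ac1d09a96c83cf2c82754c604e7ead576f0f9573949210a03d0"
        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
                PodSandboxId: sandboxID,
                Config: &runtimeapi.ContainerConfig{
                        Metadata: &runtimeapi.ContainerMetadata{Name: "clean-cilium-state", Attempt: 0},
                        Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.12.5"}, // placeholder image
                        Command:  []string{"/init-container.sh"},                               // placeholder entrypoint
                },
                SandboxConfig: &runtimeapi.PodSandboxConfig{
                        Metadata: &runtimeapi.PodSandboxMetadata{
                                Name: "cilium-mw4m4", Namespace: "kube-system", Uid: "placeholder-uid",
                        },
                },
        })
        if err != nil {
                log.Fatalf("create: %v", err)
        }
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
                ContainerId: created.GetContainerId(),
        }); err != nil {
                log.Fatalf("start: %v", err)
        }
        log.Println("started", created.GetContainerId())
}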
Jan 30 13:54:39.959425 containerd[1963]: time="2025-01-30T13:54:39.959377737Z" level=info msg="StartContainer for \"b863ce72dd7a445ce0e047b5e0111c83cf5a93be02caab680c21b0c477a267dd\" returns successfully" Jan 30 13:54:40.312382 kubelet[2431]: E0130 13:54:40.312321 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:40.891615 kubelet[2431]: I0130 13:54:40.891338 2431 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mw4m4" podStartSLOduration=5.87306693 podStartE2EDuration="5.87306693s" podCreationTimestamp="2025-01-30 13:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:54:40.8514552 +0000 UTC m=+69.172369396" watchObservedRunningTime="2025-01-30 13:54:40.87306693 +0000 UTC m=+69.193981127" Jan 30 13:54:40.956051 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 30 13:54:41.312772 kubelet[2431]: E0130 13:54:41.312709 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:42.313709 kubelet[2431]: E0130 13:54:42.313661 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:43.314669 kubelet[2431]: E0130 13:54:43.314554 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:44.315318 kubelet[2431]: E0130 13:54:44.315280 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:44.635262 (udev-worker)[5215]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:54:44.641545 (udev-worker)[5216]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:54:44.670896 systemd-networkd[1801]: lxc_health: Link UP Jan 30 13:54:44.678698 systemd-networkd[1801]: lxc_health: Gained carrier Jan 30 13:54:45.316114 kubelet[2431]: E0130 13:54:45.316054 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:46.109271 systemd-networkd[1801]: lxc_health: Gained IPv6LL Jan 30 13:54:46.316913 kubelet[2431]: E0130 13:54:46.316863 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:47.317463 kubelet[2431]: E0130 13:54:47.317384 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:47.373832 systemd[1]: run-containerd-runc-k8s.io-b863ce72dd7a445ce0e047b5e0111c83cf5a93be02caab680c21b0c477a267dd-runc.EcUqZL.mount: Deactivated successfully. 
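With the agent running, the cilium-mw4m4 pod is reported as started and systemd-networkd brings up Cilium's health veth, lxc_health, which gains carrier and an IPv6 link-local address that ntpd later binds to. As a small illustration only, and not part of any component in this log, a stdlib check that the interface exists, is up and carries a fe80:: address might look like this.

package main

import (
        "fmt"
        "log"
        "net"
        "strings"
)

func main() {
        iface, err := net.InterfaceByName("lxc_health")
        if err != nil {
                log.Fatalf("lxc_health not present: %v", err)
        }
        fmt.Printf("flags=%v mtu=%d up=%v\n", iface.Flags, iface.MTU, iface.Flags&net.FlagUp != 0)

        addrs, err := iface.Addrs()
        if err != nil {
                log.Fatal(err)
        }
        for _, a := range addrs {
                // Expect a link-local address like fe80::9471:37ff:fea4:1ff2/64,
                // matching the ntpd "Listen normally on ... lxc_health" entry below.
                if strings.HasPrefix(a.String(), "fe80:") {
                        fmt.Println("link-local:", a.String())
                }
        }
}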
Jan 30 13:54:48.318458 kubelet[2431]: E0130 13:54:48.318405 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:48.868918 ntpd[1940]: Listen normally on 16 lxc_health [fe80::9471:37ff:fea4:1ff2%15]:123 Jan 30 13:54:48.869488 ntpd[1940]: 30 Jan 13:54:48 ntpd[1940]: Listen normally on 16 lxc_health [fe80::9471:37ff:fea4:1ff2%15]:123 Jan 30 13:54:49.319361 kubelet[2431]: E0130 13:54:49.319303 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:50.319516 kubelet[2431]: E0130 13:54:50.319460 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:51.320126 kubelet[2431]: E0130 13:54:51.320070 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:52.259657 kubelet[2431]: E0130 13:54:52.259603 2431 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:52.320409 kubelet[2431]: E0130 13:54:52.320363 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:53.321142 kubelet[2431]: E0130 13:54:53.321079 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:54.322147 kubelet[2431]: E0130 13:54:54.322088 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:55.322989 kubelet[2431]: E0130 13:54:55.322914 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:56.323961 kubelet[2431]: E0130 13:54:56.323902 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:57.325061 kubelet[2431]: E0130 13:54:57.325006 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:58.326010 kubelet[2431]: E0130 13:54:58.325941 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:54:59.327107 kubelet[2431]: E0130 13:54:59.327049 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:00.327616 kubelet[2431]: E0130 13:55:00.327555 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:01.328742 kubelet[2431]: E0130 13:55:01.328681 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:02.328872 kubelet[2431]: E0130 13:55:02.328818 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:03.329536 kubelet[2431]: E0130 13:55:03.329477 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:03.769237 kubelet[2431]: E0130 13:55:03.769170 2431 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.35?timeout=10s\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 13:55:04.330151 kubelet[2431]: E0130 13:55:04.330093 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:05.330722 kubelet[2431]: E0130 13:55:05.330671 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:06.330964 kubelet[2431]: E0130 13:55:06.330904 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:07.331446 kubelet[2431]: E0130 13:55:07.331373 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:08.332554 kubelet[2431]: E0130 13:55:08.332497 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:09.333263 kubelet[2431]: E0130 13:55:09.333205 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:10.333917 kubelet[2431]: E0130 13:55:10.333859 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:11.334915 kubelet[2431]: E0130 13:55:11.334848 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:12.259337 kubelet[2431]: E0130 13:55:12.259272 2431 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:12.335246 kubelet[2431]: E0130 13:55:12.335198 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:13.336198 kubelet[2431]: E0130 13:55:13.336141 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:13.770238 kubelet[2431]: E0130 13:55:13.770184 2431 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.35?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 13:55:14.337102 kubelet[2431]: E0130 13:55:14.337045 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:14.910719 kubelet[2431]: E0130 13:55:14.910642 2431 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-01-30T13:55:04Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-30T13:55:04Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-30T13:55:04Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-30T13:55:04Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":71015439},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\\\",\\\"registry.k8s.io/kube-proxy:v1.31.5\\\"],\\\"sizeBytes\\\":30230147},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286}]}}\" for node \"172.31.27.35\": Patch \"https://172.31.18.59:6443/api/v1/nodes/172.31.27.35/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 13:55:15.337483 kubelet[2431]: E0130 13:55:15.337424 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:16.338339 kubelet[2431]: E0130 13:55:16.338281 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:17.338916 kubelet[2431]: E0130 13:55:17.338860 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:18.339958 kubelet[2431]: E0130 13:55:18.339904 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:19.340132 kubelet[2431]: E0130 13:55:19.340077 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:20.340919 kubelet[2431]: E0130 13:55:20.340863 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:21.341923 kubelet[2431]: E0130 13:55:21.341878 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:22.342364 kubelet[2431]: E0130 13:55:22.342319 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:23.343136 kubelet[2431]: E0130 13:55:23.343076 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 
13:55:23.567044 kubelet[2431]: E0130 13:55:23.566070 2431 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.35?timeout=10s\": unexpected EOF" Jan 30 13:55:23.567240 kubelet[2431]: E0130 13:55:23.567164 2431 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.35?timeout=10s\": dial tcp 172.31.18.59:6443: connect: connection refused" Jan 30 13:55:23.570550 kubelet[2431]: E0130 13:55:23.570514 2431 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.35?timeout=10s\": dial tcp 172.31.18.59:6443: connect: connection refused" Jan 30 13:55:23.570550 kubelet[2431]: I0130 13:55:23.570554 2431 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 13:55:23.575000 kubelet[2431]: E0130 13:55:23.574626 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.35?timeout=10s\": dial tcp 172.31.18.59:6443: connect: connection refused" interval="200ms" Jan 30 13:55:23.776133 kubelet[2431]: E0130 13:55:23.776082 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.35?timeout=10s\": dial tcp 172.31.18.59:6443: connect: connection refused" interval="400ms" Jan 30 13:55:24.177279 kubelet[2431]: E0130 13:55:24.177147 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.35?timeout=10s\": dial tcp 172.31.18.59:6443: connect: connection refused" interval="800ms" Jan 30 13:55:24.343472 kubelet[2431]: E0130 13:55:24.343411 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:24.568434 kubelet[2431]: E0130 13:55:24.568395 2431 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.27.35\": Get \"https://172.31.18.59:6443/api/v1/nodes/172.31.27.35?timeout=10s\": dial tcp 172.31.18.59:6443: connect: connection refused - error from a previous attempt: unexpected EOF" Jan 30 13:55:24.569045 kubelet[2431]: E0130 13:55:24.569011 2431 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.27.35\": Get \"https://172.31.18.59:6443/api/v1/nodes/172.31.27.35?timeout=10s\": dial tcp 172.31.18.59:6443: connect: connection refused" Jan 30 13:55:24.569692 kubelet[2431]: E0130 13:55:24.569665 2431 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.27.35\": Get \"https://172.31.18.59:6443/api/v1/nodes/172.31.27.35?timeout=10s\": dial tcp 172.31.18.59:6443: connect: connection refused" Jan 30 13:55:24.570731 kubelet[2431]: E0130 13:55:24.570701 2431 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.27.35\": Get \"https://172.31.18.59:6443/api/v1/nodes/172.31.27.35?timeout=10s\": dial tcp 172.31.18.59:6443: connect: connection refused" Jan 30 13:55:24.570731 kubelet[2431]: E0130 13:55:24.570725 2431 
kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count" Jan 30 13:55:25.343760 kubelet[2431]: E0130 13:55:25.343718 2431 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:55:25.619186 systemd-logind[1944]: Power key pressed short. Jan 30 13:55:25.619205 systemd-logind[1944]: Powering off...
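The final stretch of the log records the kubelet losing its API server at 172.31.18.59:6443: lease Put requests first time out, then hit "connection refused", the ensure-lease retry interval doubles from 200ms to 400ms to 800ms, the node-status patch exhausts its retry count, and systemd-logind finally powers the node off. The sketch below shows the general lease-renewal-with-backoff pattern using client-go; it is not the kubelet's own controller. The node name and kube-node-lease namespace are taken from the log, while the kubeconfig path is an assumption, and the 10-second client timeout roughly mirrors the ?timeout=10s on the failing requests.

package main

import (
        "context"
        "log"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
)

func main() {
        // Assumed kubeconfig location for a node agent.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
                log.Fatal(err)
        }
        cfg.Timeout = 10 * time.Second
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
                log.Fatal(err)
        }

        leases := cs.CoordinationV1().Leases("kube-node-lease")
        // Mirrors the 200ms -> 400ms -> 800ms retry intervals seen in the log.
        backoff := wait.Backoff{Duration: 200 * time.Millisecond, Factor: 2, Steps: 5}

        err = wait.ExponentialBackoff(backoff, func() (bool, error) {
                lease, err := leases.Get(context.TODO(), "172.31.27.35", metav1.GetOptions{})
                if err != nil {
                        log.Printf("get lease: %v", err) // e.g. timeout or connection refused
                        return false, nil                // retry with backoff
                }
                now := metav1.NewMicroTime(time.Now())
                lease.Spec.RenewTime = &now
                if _, err := leases.Update(context.TODO(), lease, metav1.UpdateOptions{}); err != nil {
                        log.Printf("update lease: %v", err)
                        return false, nil
                }
                return true, nil
        })
        if err != nil {
                log.Fatalf("gave up renewing lease: %v", err)
        }
}

Once every attempt fails, a real kubelet keeps the node object stale, which is why the last entries show "update node status exceeds retry count" just before the instance is powered off.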