Jan 29 16:28:36.105027 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025 Jan 29 16:28:36.105081 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:28:36.105096 kernel: BIOS-provided physical RAM map: Jan 29 16:28:36.105107 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 16:28:36.105118 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 16:28:36.105129 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 16:28:36.105145 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Jan 29 16:28:36.105159 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Jan 29 16:28:36.105489 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Jan 29 16:28:36.105533 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 16:28:36.105850 kernel: NX (Execute Disable) protection: active Jan 29 16:28:36.105864 kernel: APIC: Static calls initialized Jan 29 16:28:36.105877 kernel: SMBIOS 2.7 present. Jan 29 16:28:36.105890 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 29 16:28:36.105912 kernel: Hypervisor detected: KVM Jan 29 16:28:36.105969 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 16:28:36.105983 kernel: kvm-clock: using sched offset of 8863172775 cycles Jan 29 16:28:36.105999 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 16:28:36.106013 kernel: tsc: Detected 2499.998 MHz processor Jan 29 16:28:36.106027 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 16:28:36.106334 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 16:28:36.106407 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Jan 29 16:28:36.106425 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 16:28:36.106482 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 16:28:36.106499 kernel: Using GB pages for direct mapping Jan 29 16:28:36.106513 kernel: ACPI: Early table checksum verification disabled Jan 29 16:28:36.106569 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Jan 29 16:28:36.106586 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Jan 29 16:28:36.106601 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 29 16:28:36.106656 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 29 16:28:36.106678 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Jan 29 16:28:36.106693 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 29 16:28:36.106749 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 29 16:28:36.106765 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 29 16:28:36.106780 
kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 29 16:28:36.106836 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 29 16:28:36.106949 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 29 16:28:36.107005 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 29 16:28:36.107022 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Jan 29 16:28:36.107043 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Jan 29 16:28:36.107112 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Jan 29 16:28:36.107130 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Jan 29 16:28:36.107185 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Jan 29 16:28:36.107203 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Jan 29 16:28:36.107484 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Jan 29 16:28:36.107510 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Jan 29 16:28:36.107527 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Jan 29 16:28:36.107581 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Jan 29 16:28:36.107597 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 16:28:36.107612 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 29 16:28:36.107626 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 29 16:28:36.107641 kernel: NUMA: Initialized distance table, cnt=1 Jan 29 16:28:36.107655 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Jan 29 16:28:36.107676 kernel: Zone ranges: Jan 29 16:28:36.107691 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 16:28:36.107706 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Jan 29 16:28:36.107720 kernel: Normal empty Jan 29 16:28:36.107744 kernel: Movable zone start for each node Jan 29 16:28:36.107759 kernel: Early memory node ranges Jan 29 16:28:36.107773 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 16:28:36.107787 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Jan 29 16:28:36.107802 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Jan 29 16:28:36.107816 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 16:28:36.107834 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 16:28:36.107849 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Jan 29 16:28:36.107863 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 29 16:28:36.107877 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 16:28:36.107892 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 29 16:28:36.107906 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 16:28:36.108314 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 16:28:36.108330 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 16:28:36.108342 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 16:28:36.108361 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 16:28:36.108373 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 16:28:36.108385 kernel: TSC deadline timer available Jan 29 16:28:36.108397 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 16:28:36.108411 
kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 16:28:36.108423 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jan 29 16:28:36.108437 kernel: Booting paravirtualized kernel on KVM Jan 29 16:28:36.108453 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 16:28:36.108467 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 16:28:36.108486 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 29 16:28:36.108502 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 16:28:36.108518 kernel: pcpu-alloc: [0] 0 1 Jan 29 16:28:36.108530 kernel: kvm-guest: PV spinlocks enabled Jan 29 16:28:36.108545 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 16:28:36.108560 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:28:36.108608 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 16:28:36.108622 kernel: random: crng init done Jan 29 16:28:36.108639 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 16:28:36.108653 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 16:28:36.108670 kernel: Fallback order for Node 0: 0 Jan 29 16:28:36.108689 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Jan 29 16:28:36.108737 kernel: Policy zone: DMA32 Jan 29 16:28:36.108754 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 16:28:36.108868 kernel: Memory: 1930300K/2057760K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 127200K reserved, 0K cma-reserved) Jan 29 16:28:36.108922 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 16:28:36.108936 kernel: Kernel/User page tables isolation: enabled Jan 29 16:28:36.108955 kernel: ftrace: allocating 37893 entries in 149 pages Jan 29 16:28:36.108968 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 16:28:36.108981 kernel: Dynamic Preempt: voluntary Jan 29 16:28:36.108996 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 16:28:36.109011 kernel: rcu: RCU event tracing is enabled. Jan 29 16:28:36.109026 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 16:28:36.109042 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 16:28:36.109057 kernel: Rude variant of Tasks RCU enabled. Jan 29 16:28:36.109090 kernel: Tracing variant of Tasks RCU enabled. Jan 29 16:28:36.109108 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 16:28:36.109123 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 16:28:36.109137 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 16:28:36.109152 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 29 16:28:36.109166 kernel: Console: colour VGA+ 80x25 Jan 29 16:28:36.109180 kernel: printk: console [ttyS0] enabled Jan 29 16:28:36.109195 kernel: ACPI: Core revision 20230628 Jan 29 16:28:36.109210 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 29 16:28:36.109224 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 16:28:36.109241 kernel: x2apic enabled Jan 29 16:28:36.109256 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 16:28:36.109282 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 29 16:28:36.109300 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jan 29 16:28:36.109316 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 29 16:28:36.109331 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 29 16:28:36.109345 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 16:28:36.109361 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 16:28:36.109375 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 16:28:36.109390 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 16:28:36.109406 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jan 29 16:28:36.109421 kernel: RETBleed: Vulnerable Jan 29 16:28:36.109436 kernel: Speculative Store Bypass: Vulnerable Jan 29 16:28:36.109453 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 16:28:36.109468 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 16:28:36.109483 kernel: GDS: Unknown: Dependent on hypervisor status Jan 29 16:28:36.109498 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 16:28:36.109513 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 16:28:36.109529 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 16:28:36.109546 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 29 16:28:36.109561 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 29 16:28:36.109577 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 29 16:28:36.109592 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 29 16:28:36.109606 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 29 16:28:36.109621 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 29 16:28:36.109636 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 16:28:36.109651 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 29 16:28:36.109665 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 29 16:28:36.109680 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 29 16:28:36.109696 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 29 16:28:36.109714 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 29 16:28:36.109729 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 29 16:28:36.109746 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Jan 29 16:28:36.109759 kernel: Freeing SMP alternatives memory: 32K Jan 29 16:28:36.109843 kernel: pid_max: default: 32768 minimum: 301 Jan 29 16:28:36.109861 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 16:28:36.109875 kernel: landlock: Up and running. Jan 29 16:28:36.109888 kernel: SELinux: Initializing. Jan 29 16:28:36.109902 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 16:28:36.109916 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 16:28:36.109930 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 29 16:28:36.109948 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 16:28:36.109963 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 16:28:36.110055 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 16:28:36.110099 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 29 16:28:36.110112 kernel: signal: max sigframe size: 3632 Jan 29 16:28:36.110125 kernel: rcu: Hierarchical SRCU implementation. Jan 29 16:28:36.110139 kernel: rcu: Max phase no-delay instances is 400. Jan 29 16:28:36.110153 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 16:28:36.110166 kernel: smp: Bringing up secondary CPUs ... Jan 29 16:28:36.110183 kernel: smpboot: x86: Booting SMP configuration: Jan 29 16:28:36.110196 kernel: .... node #0, CPUs: #1 Jan 29 16:28:36.110211 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 29 16:28:36.110227 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 29 16:28:36.110243 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 16:28:36.110256 kernel: smpboot: Max logical packages: 1 Jan 29 16:28:36.110270 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jan 29 16:28:36.112812 kernel: devtmpfs: initialized Jan 29 16:28:36.112833 kernel: x86/mm: Memory block size: 128MB Jan 29 16:28:36.112909 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 16:28:36.112924 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 16:28:36.112940 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 16:28:36.112955 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 16:28:36.112970 kernel: audit: initializing netlink subsys (disabled) Jan 29 16:28:36.112986 kernel: audit: type=2000 audit(1738168115.188:1): state=initialized audit_enabled=0 res=1 Jan 29 16:28:36.113001 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 16:28:36.113016 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 16:28:36.113030 kernel: cpuidle: using governor menu Jan 29 16:28:36.113049 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 16:28:36.113064 kernel: dca service started, version 1.12.1 Jan 29 16:28:36.113090 kernel: PCI: Using configuration type 1 for base access Jan 29 16:28:36.113106 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 16:28:36.113122 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 16:28:36.113137 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 16:28:36.113153 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 16:28:36.113168 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 16:28:36.113183 kernel: ACPI: Added _OSI(Module Device) Jan 29 16:28:36.113202 kernel: ACPI: Added _OSI(Processor Device) Jan 29 16:28:36.113217 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 16:28:36.113232 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 16:28:36.113247 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 29 16:28:36.113262 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 16:28:36.113278 kernel: ACPI: Interpreter enabled Jan 29 16:28:36.113294 kernel: ACPI: PM: (supports S0 S5) Jan 29 16:28:36.113309 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 16:28:36.113325 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 16:28:36.113344 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 16:28:36.113360 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 29 16:28:36.113376 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 16:28:36.113832 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 16:28:36.113986 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 16:28:36.114131 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 16:28:36.114150 kernel: acpiphp: Slot [3] registered Jan 29 16:28:36.114171 kernel: acpiphp: Slot [4] registered Jan 29 16:28:36.114185 kernel: acpiphp: Slot [5] registered Jan 29 16:28:36.114200 kernel: acpiphp: Slot [6] registered Jan 29 16:28:36.114214 kernel: acpiphp: Slot [7] registered Jan 29 16:28:36.114229 kernel: acpiphp: Slot [8] registered Jan 29 16:28:36.114243 kernel: acpiphp: Slot [9] registered Jan 29 16:28:36.114258 kernel: acpiphp: Slot [10] registered Jan 29 16:28:36.114273 kernel: acpiphp: Slot [11] registered Jan 29 16:28:36.114288 kernel: acpiphp: Slot [12] registered Jan 29 16:28:36.114305 kernel: acpiphp: Slot [13] registered Jan 29 16:28:36.114319 kernel: acpiphp: Slot [14] registered Jan 29 16:28:36.114334 kernel: acpiphp: Slot [15] registered Jan 29 16:28:36.114348 kernel: acpiphp: Slot [16] registered Jan 29 16:28:36.114362 kernel: acpiphp: Slot [17] registered Jan 29 16:28:36.114377 kernel: acpiphp: Slot [18] registered Jan 29 16:28:36.114391 kernel: acpiphp: Slot [19] registered Jan 29 16:28:36.114405 kernel: acpiphp: Slot [20] registered Jan 29 16:28:36.114420 kernel: acpiphp: Slot [21] registered Jan 29 16:28:36.114434 kernel: acpiphp: Slot [22] registered Jan 29 16:28:36.114451 kernel: acpiphp: Slot [23] registered Jan 29 16:28:36.114465 kernel: acpiphp: Slot [24] registered Jan 29 16:28:36.114479 kernel: acpiphp: Slot [25] registered Jan 29 16:28:36.114494 kernel: acpiphp: Slot [26] registered Jan 29 16:28:36.114508 kernel: acpiphp: Slot [27] registered Jan 29 16:28:36.114522 kernel: acpiphp: Slot [28] registered Jan 29 16:28:36.114537 kernel: acpiphp: Slot [29] registered Jan 29 16:28:36.114551 kernel: acpiphp: Slot [30] registered Jan 29 16:28:36.114566 kernel: acpiphp: Slot [31] registered Jan 29 16:28:36.114583 kernel: PCI host bridge to bus 0000:00 
Jan 29 16:28:36.114711 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 16:28:36.114827 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 16:28:36.114950 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 16:28:36.115063 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 29 16:28:36.115191 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 16:28:36.115340 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 16:28:36.115504 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 29 16:28:36.115716 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jan 29 16:28:36.115863 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 29 16:28:36.116004 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jan 29 16:28:36.116218 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 29 16:28:36.116360 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 29 16:28:36.116493 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 29 16:28:36.116742 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 29 16:28:36.116886 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 29 16:28:36.117443 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 29 16:28:36.117795 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jan 29 16:28:36.118241 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Jan 29 16:28:36.118409 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 29 16:28:36.118548 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 16:28:36.118701 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 29 16:28:36.119335 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Jan 29 16:28:36.119538 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 29 16:28:36.120008 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Jan 29 16:28:36.120037 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 16:28:36.120101 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 16:28:36.120119 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 16:28:36.120141 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 16:28:36.120200 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 16:28:36.120218 kernel: iommu: Default domain type: Translated Jan 29 16:28:36.120235 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 16:28:36.120252 kernel: PCI: Using ACPI for IRQ routing Jan 29 16:28:36.120267 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 16:28:36.120284 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 16:28:36.120299 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Jan 29 16:28:36.120457 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 29 16:28:36.120700 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 29 16:28:36.120998 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 16:28:36.121022 kernel: vgaarb: loaded Jan 29 16:28:36.121037 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 29 16:28:36.121049 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jan 29 16:28:36.121063 kernel: clocksource: Switched 
to clocksource kvm-clock Jan 29 16:28:36.121243 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 16:28:36.121260 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 16:28:36.121279 kernel: pnp: PnP ACPI init Jan 29 16:28:36.121293 kernel: pnp: PnP ACPI: found 5 devices Jan 29 16:28:36.122905 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 16:28:36.122981 kernel: NET: Registered PF_INET protocol family Jan 29 16:28:36.123002 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 16:28:36.123017 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 29 16:28:36.123158 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 16:28:36.123178 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 16:28:36.123191 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 16:28:36.123208 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 29 16:28:36.123222 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 16:28:36.123235 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 16:28:36.123249 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 16:28:36.123262 kernel: NET: Registered PF_XDP protocol family Jan 29 16:28:36.123606 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 16:28:36.123749 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 16:28:36.123873 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 16:28:36.123997 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 29 16:28:36.124247 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 16:28:36.124269 kernel: PCI: CLS 0 bytes, default 64 Jan 29 16:28:36.124352 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 16:28:36.124372 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 29 16:28:36.124387 kernel: clocksource: Switched to clocksource tsc Jan 29 16:28:36.124401 kernel: Initialise system trusted keyrings Jan 29 16:28:36.124415 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 16:28:36.124434 kernel: Key type asymmetric registered Jan 29 16:28:36.124460 kernel: Asymmetric key parser 'x509' registered Jan 29 16:28:36.124492 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 16:28:36.124516 kernel: io scheduler mq-deadline registered Jan 29 16:28:36.124530 kernel: io scheduler kyber registered Jan 29 16:28:36.124545 kernel: io scheduler bfq registered Jan 29 16:28:36.124561 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 16:28:36.124576 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 16:28:36.124644 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 16:28:36.124662 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 16:28:36.124677 kernel: i8042: Warning: Keylock active Jan 29 16:28:36.124722 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 16:28:36.124738 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 16:28:36.125004 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 29 16:28:36.125258 kernel: rtc_cmos 00:00: registered as rtc0 Jan 29 
16:28:36.125562 kernel: rtc_cmos 00:00: setting system clock to 2025-01-29T16:28:35 UTC (1738168115) Jan 29 16:28:36.125699 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 29 16:28:36.125725 kernel: intel_pstate: CPU model not supported Jan 29 16:28:36.125741 kernel: NET: Registered PF_INET6 protocol family Jan 29 16:28:36.125757 kernel: Segment Routing with IPv6 Jan 29 16:28:36.125773 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 16:28:36.125789 kernel: NET: Registered PF_PACKET protocol family Jan 29 16:28:36.125805 kernel: Key type dns_resolver registered Jan 29 16:28:36.125821 kernel: IPI shorthand broadcast: enabled Jan 29 16:28:36.125836 kernel: sched_clock: Marking stable (533069788, 217511106)->(839521282, -88940388) Jan 29 16:28:36.125850 kernel: registered taskstats version 1 Jan 29 16:28:36.125868 kernel: Loading compiled-in X.509 certificates Jan 29 16:28:36.125884 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340' Jan 29 16:28:36.125899 kernel: Key type .fscrypt registered Jan 29 16:28:36.125914 kernel: Key type fscrypt-provisioning registered Jan 29 16:28:36.125930 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 16:28:36.125945 kernel: ima: Allocated hash algorithm: sha1 Jan 29 16:28:36.125960 kernel: ima: No architecture policies found Jan 29 16:28:36.125976 kernel: clk: Disabling unused clocks Jan 29 16:28:36.125991 kernel: Freeing unused kernel image (initmem) memory: 43472K Jan 29 16:28:36.126010 kernel: Write protecting the kernel read-only data: 38912k Jan 29 16:28:36.126026 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K Jan 29 16:28:36.126041 kernel: Run /init as init process Jan 29 16:28:36.126055 kernel: with arguments: Jan 29 16:28:36.126083 kernel: /init Jan 29 16:28:36.126098 kernel: with environment: Jan 29 16:28:36.126113 kernel: HOME=/ Jan 29 16:28:36.126128 kernel: TERM=linux Jan 29 16:28:36.126141 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 16:28:36.126162 systemd[1]: Successfully made /usr/ read-only. Jan 29 16:28:36.126194 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:28:36.126214 systemd[1]: Detected virtualization amazon. Jan 29 16:28:36.126228 systemd[1]: Detected architecture x86-64. Jan 29 16:28:36.126244 systemd[1]: Running in initrd. Jan 29 16:28:36.126263 systemd[1]: No hostname configured, using default hostname. Jan 29 16:28:36.126600 systemd[1]: Hostname set to . Jan 29 16:28:36.126625 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:28:36.126643 systemd[1]: Queued start job for default target initrd.target. Jan 29 16:28:36.126661 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:28:36.126680 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:28:36.126701 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 16:28:36.126719 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 29 16:28:36.126741 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 16:28:36.126762 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 16:28:36.126783 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 16:28:36.126801 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 16:28:36.126820 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:28:36.126839 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:28:36.126860 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:28:36.126878 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:28:36.126897 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:28:36.126943 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:28:36.126962 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:28:36.126980 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:28:36.126999 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 16:28:36.127017 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 29 16:28:36.127035 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:28:36.127055 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:28:36.127092 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:28:36.127106 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:28:36.127121 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 16:28:36.127140 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 16:28:36.127159 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:28:36.127174 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 16:28:36.127193 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 16:28:36.128737 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:28:36.128756 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:28:36.128772 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:28:36.128826 systemd-journald[179]: Collecting audit messages is disabled. Jan 29 16:28:36.128867 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 16:28:36.128883 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:28:36.128902 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 16:28:36.128919 systemd-journald[179]: Journal started Jan 29 16:28:36.128955 systemd-journald[179]: Runtime Journal (/run/log/journal/ec280a9326185b621f9b9f2825dcc059) is 4.8M, max 38.5M, 33.7M free. Jan 29 16:28:36.124948 systemd-modules-load[180]: Inserted module 'overlay' Jan 29 16:28:36.147101 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:28:36.147185 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 29 16:28:36.156730 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:28:36.194100 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 16:28:36.197609 systemd-modules-load[180]: Inserted module 'br_netfilter' Jan 29 16:28:36.297581 kernel: Bridge firewalling registered Jan 29 16:28:36.205416 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:28:36.310290 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:28:36.321139 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:28:36.324161 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:28:36.324792 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:28:36.339328 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:28:36.348847 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:28:36.354592 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:28:36.363294 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:28:36.365358 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:28:36.383346 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 16:28:36.388290 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:28:36.402782 dracut-cmdline[214]: dracut-dracut-053 Jan 29 16:28:36.407614 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:28:36.458453 systemd-resolved[215]: Positive Trust Anchors: Jan 29 16:28:36.458473 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:28:36.458538 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:28:36.463273 systemd-resolved[215]: Defaulting to hostname 'linux'. Jan 29 16:28:36.464807 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:28:36.466267 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:28:36.538321 kernel: SCSI subsystem initialized Jan 29 16:28:36.553103 kernel: Loading iSCSI transport class v2.0-870. 
Jan 29 16:28:36.570374 kernel: iscsi: registered transport (tcp) Jan 29 16:28:36.616257 kernel: iscsi: registered transport (qla4xxx) Jan 29 16:28:36.616362 kernel: QLogic iSCSI HBA Driver Jan 29 16:28:36.675685 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 16:28:36.690339 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 16:28:36.723280 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 16:28:36.723622 kernel: device-mapper: uevent: version 1.0.3 Jan 29 16:28:36.724712 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 16:28:36.776210 kernel: raid6: avx512x4 gen() 15209 MB/s Jan 29 16:28:36.794398 kernel: raid6: avx512x2 gen() 8263 MB/s Jan 29 16:28:36.811186 kernel: raid6: avx512x1 gen() 13198 MB/s Jan 29 16:28:36.829119 kernel: raid6: avx2x4 gen() 11317 MB/s Jan 29 16:28:36.846115 kernel: raid6: avx2x2 gen() 8132 MB/s Jan 29 16:28:36.863917 kernel: raid6: avx2x1 gen() 8078 MB/s Jan 29 16:28:36.863993 kernel: raid6: using algorithm avx512x4 gen() 15209 MB/s Jan 29 16:28:36.881908 kernel: raid6: .... xor() 4493 MB/s, rmw enabled Jan 29 16:28:36.881987 kernel: raid6: using avx512x2 recovery algorithm Jan 29 16:28:36.921256 kernel: xor: automatically using best checksumming function avx Jan 29 16:28:37.180140 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 16:28:37.208854 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:28:37.219590 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:28:37.260836 systemd-udevd[398]: Using default interface naming scheme 'v255'. Jan 29 16:28:37.267183 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:28:37.279428 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 16:28:37.301155 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Jan 29 16:28:37.339105 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:28:37.345270 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:28:37.412736 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:28:37.423528 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 16:28:37.455576 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 16:28:37.458313 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:28:37.462888 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:28:37.464337 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:28:37.472448 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 16:28:37.509847 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:28:37.514030 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 29 16:28:37.547767 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 29 16:28:37.548545 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. 
Jan 29 16:28:37.548802 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:41:f9:c4:fa:5b Jan 29 16:28:37.557116 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 16:28:37.562108 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 29 16:28:37.564272 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 29 16:28:37.571597 (udev-worker)[459]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:28:37.578089 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 29 16:28:37.587261 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:28:37.592010 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 16:28:37.592049 kernel: GPT:9289727 != 16777215 Jan 29 16:28:37.592148 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 16:28:37.592173 kernel: GPT:9289727 != 16777215 Jan 29 16:28:37.592190 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 16:28:37.592206 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 16:28:37.587714 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:28:37.594293 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:28:37.596820 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:28:37.597053 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:28:37.629287 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:28:37.640592 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:28:37.642455 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 16:28:37.642491 kernel: AES CTR mode by8 optimization enabled Jan 29 16:28:37.645047 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:28:37.808093 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (448) Jan 29 16:28:37.822108 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (449) Jan 29 16:28:37.872577 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:28:37.886790 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:28:37.965638 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:28:37.994121 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 29 16:28:38.077241 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 29 16:28:38.077966 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 29 16:28:38.109598 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 29 16:28:38.147817 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 16:28:38.166413 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 16:28:38.207407 disk-uuid[632]: Primary Header is updated. Jan 29 16:28:38.207407 disk-uuid[632]: Secondary Entries is updated. Jan 29 16:28:38.207407 disk-uuid[632]: Secondary Header is updated. 
Jan 29 16:28:38.213097 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 16:28:38.219093 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 16:28:39.223088 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 16:28:39.223435 disk-uuid[633]: The operation has completed successfully. Jan 29 16:28:39.372961 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 16:28:39.373100 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 16:28:39.428290 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 16:28:39.446231 sh[891]: Success Jan 29 16:28:39.474103 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 16:28:39.579312 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 16:28:39.589395 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 16:28:39.596199 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 16:28:39.640632 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3 Jan 29 16:28:39.640698 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:28:39.640718 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 16:28:39.642487 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 16:28:39.642512 kernel: BTRFS info (device dm-0): using free space tree Jan 29 16:28:39.672103 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 16:28:39.688527 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 16:28:39.689327 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 16:28:39.698343 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 16:28:39.704900 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 16:28:39.782185 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:28:39.782265 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:28:39.782288 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 16:28:39.795143 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 16:28:39.812100 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:28:39.812363 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 16:28:39.823677 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 16:28:39.835341 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 16:28:39.931298 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:28:39.940425 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:28:40.007860 systemd-networkd[1085]: lo: Link UP Jan 29 16:28:40.007871 systemd-networkd[1085]: lo: Gained carrier Jan 29 16:28:40.011046 systemd-networkd[1085]: Enumeration completed Jan 29 16:28:40.011378 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:28:40.012059 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 29 16:28:40.012303 systemd-networkd[1085]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:28:40.030614 systemd[1]: Reached target network.target - Network. Jan 29 16:28:40.033708 systemd-networkd[1085]: eth0: Link UP Jan 29 16:28:40.033715 systemd-networkd[1085]: eth0: Gained carrier Jan 29 16:28:40.033732 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:28:40.070237 systemd-networkd[1085]: eth0: DHCPv4 address 172.31.31.0/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 16:28:40.124134 ignition[1015]: Ignition 2.20.0 Jan 29 16:28:40.124210 ignition[1015]: Stage: fetch-offline Jan 29 16:28:40.124465 ignition[1015]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:28:40.124477 ignition[1015]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 16:28:40.129427 ignition[1015]: Ignition finished successfully Jan 29 16:28:40.131737 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:28:40.138375 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 16:28:40.173272 ignition[1095]: Ignition 2.20.0 Jan 29 16:28:40.173393 ignition[1095]: Stage: fetch Jan 29 16:28:40.173842 ignition[1095]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:28:40.174060 ignition[1095]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 16:28:40.174251 ignition[1095]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 16:28:40.215304 ignition[1095]: PUT result: OK Jan 29 16:28:40.218359 ignition[1095]: parsed url from cmdline: "" Jan 29 16:28:40.218370 ignition[1095]: no config URL provided Jan 29 16:28:40.218380 ignition[1095]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:28:40.218396 ignition[1095]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:28:40.218422 ignition[1095]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 16:28:40.223760 ignition[1095]: PUT result: OK Jan 29 16:28:40.225806 ignition[1095]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 29 16:28:40.229503 ignition[1095]: GET result: OK Jan 29 16:28:40.230021 ignition[1095]: parsing config with SHA512: edccee98ab19850be172debfe8ba5d94c70224fec40f6e068ca9e34d5b6d476db1e82f9d568d0cd028431d77eec952937e5e4913778b93be25021fb6a08f1ae2 Jan 29 16:28:40.235883 unknown[1095]: fetched base config from "system" Jan 29 16:28:40.235903 unknown[1095]: fetched base config from "system" Jan 29 16:28:40.236251 ignition[1095]: fetch: fetch complete Jan 29 16:28:40.235911 unknown[1095]: fetched user config from "aws" Jan 29 16:28:40.236258 ignition[1095]: fetch: fetch passed Jan 29 16:28:40.240349 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 16:28:40.236316 ignition[1095]: Ignition finished successfully Jan 29 16:28:40.250642 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 29 16:28:40.280820 ignition[1102]: Ignition 2.20.0 Jan 29 16:28:40.280835 ignition[1102]: Stage: kargs Jan 29 16:28:40.281518 ignition[1102]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:28:40.281540 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 16:28:40.281661 ignition[1102]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 16:28:40.282905 ignition[1102]: PUT result: OK Jan 29 16:28:40.296000 ignition[1102]: kargs: kargs passed Jan 29 16:28:40.296089 ignition[1102]: Ignition finished successfully Jan 29 16:28:40.299761 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 16:28:40.307464 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 16:28:40.339320 ignition[1108]: Ignition 2.20.0 Jan 29 16:28:40.339335 ignition[1108]: Stage: disks Jan 29 16:28:40.339753 ignition[1108]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:28:40.339767 ignition[1108]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 16:28:40.339880 ignition[1108]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 16:28:40.341290 ignition[1108]: PUT result: OK Jan 29 16:28:40.347622 ignition[1108]: disks: disks passed Jan 29 16:28:40.347690 ignition[1108]: Ignition finished successfully Jan 29 16:28:40.350396 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 16:28:40.350784 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 16:28:40.354368 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 16:28:40.357116 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:28:40.360078 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:28:40.364230 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:28:40.377355 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 16:28:40.416149 systemd-fsck[1117]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 16:28:40.419206 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 16:28:40.649249 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 16:28:40.779231 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none. Jan 29 16:28:40.780037 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 16:28:40.783346 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 16:28:40.792222 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:28:40.802337 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 16:28:40.803137 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 16:28:40.803208 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 16:28:40.803244 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:28:40.821216 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 16:28:40.833862 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 29 16:28:40.836238 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1136) Jan 29 16:28:40.840553 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:28:40.840623 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:28:40.840704 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 16:28:40.846095 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 16:28:40.848151 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 16:28:41.001053 initrd-setup-root[1160]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 16:28:41.020946 initrd-setup-root[1167]: cut: /sysroot/etc/group: No such file or directory Jan 29 16:28:41.028093 initrd-setup-root[1174]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 16:28:41.039981 initrd-setup-root[1181]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 16:28:41.246653 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 16:28:41.252213 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 16:28:41.259271 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 16:28:41.266101 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:28:41.301104 ignition[1248]: INFO : Ignition 2.20.0 Jan 29 16:28:41.301104 ignition[1248]: INFO : Stage: mount Jan 29 16:28:41.303459 ignition[1248]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:28:41.303459 ignition[1248]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 16:28:41.303459 ignition[1248]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 16:28:41.308982 ignition[1248]: INFO : PUT result: OK Jan 29 16:28:41.312881 ignition[1248]: INFO : mount: mount passed Jan 29 16:28:41.315421 ignition[1248]: INFO : Ignition finished successfully Jan 29 16:28:41.318006 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 16:28:41.332274 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 16:28:41.358851 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 16:28:41.639783 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 16:28:41.654391 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:28:41.705214 systemd-networkd[1085]: eth0: Gained IPv6LL Jan 29 16:28:41.710089 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1261) Jan 29 16:28:41.710149 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e Jan 29 16:28:41.712053 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 29 16:28:41.712118 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 16:28:41.720089 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 16:28:41.723547 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 16:28:41.760316 ignition[1278]: INFO : Ignition 2.20.0 Jan 29 16:28:41.760316 ignition[1278]: INFO : Stage: files Jan 29 16:28:41.762607 ignition[1278]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:28:41.762607 ignition[1278]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 16:28:41.765658 ignition[1278]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 16:28:41.767745 ignition[1278]: INFO : PUT result: OK Jan 29 16:28:41.771406 ignition[1278]: DEBUG : files: compiled without relabeling support, skipping Jan 29 16:28:41.773440 ignition[1278]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 16:28:41.773440 ignition[1278]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 16:28:41.781425 ignition[1278]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 16:28:41.784963 ignition[1278]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 16:28:41.787602 unknown[1278]: wrote ssh authorized keys file for user: core Jan 29 16:28:41.789626 ignition[1278]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 16:28:41.804104 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 29 16:28:41.804104 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 16:28:41.809140 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:28:41.809140 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:28:41.809140 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:28:41.809140 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:28:41.809140 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:28:41.809140 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 29 16:28:42.317219 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 29 16:28:42.734853 ignition[1278]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 16:28:42.737786 ignition[1278]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:28:42.739913 ignition[1278]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:28:42.739913 ignition[1278]: INFO : files: files passed Jan 29 16:28:42.744501 ignition[1278]: INFO : Ignition finished successfully Jan 29 16:28:42.742049 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jan 29 16:28:42.751436 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 16:28:42.755451 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 16:28:42.770799 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 16:28:42.774313 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 16:28:42.781360 initrd-setup-root-after-ignition[1306]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:28:42.781360 initrd-setup-root-after-ignition[1306]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:28:42.786200 initrd-setup-root-after-ignition[1310]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:28:42.794538 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:28:42.798659 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 16:28:42.806530 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 16:28:42.852160 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 16:28:42.852357 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 16:28:42.855563 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 16:28:42.861182 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 16:28:42.864008 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 16:28:42.873645 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 16:28:42.890134 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:28:42.903509 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 16:28:42.918707 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:28:42.921510 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:28:42.926102 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 16:28:42.928284 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 16:28:42.928467 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:28:42.934505 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 16:28:42.944762 systemd[1]: Stopped target basic.target - Basic System. Jan 29 16:28:42.948011 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 16:28:42.948266 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:28:42.955451 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 16:28:42.958748 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 16:28:42.961585 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:28:42.961948 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 16:28:42.967549 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 16:28:42.970775 systemd[1]: Stopped target swap.target - Swaps. Jan 29 16:28:42.980601 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 16:28:42.983865 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 29 16:28:42.988260 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:28:42.990138 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:28:42.994856 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 16:28:42.996833 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:28:42.998691 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 16:28:42.999144 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 16:28:43.005760 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 16:28:43.006998 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:28:43.008941 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 16:28:43.009118 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 16:28:43.020534 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 16:28:43.022117 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 16:28:43.022493 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:28:43.039911 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 16:28:43.046178 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 16:28:43.046503 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:28:43.048825 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 16:28:43.049220 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:28:43.075090 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 16:28:43.077211 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 16:28:43.101198 ignition[1331]: INFO : Ignition 2.20.0 Jan 29 16:28:43.105108 ignition[1331]: INFO : Stage: umount Jan 29 16:28:43.105108 ignition[1331]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:28:43.105108 ignition[1331]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 16:28:43.105108 ignition[1331]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 16:28:43.113725 ignition[1331]: INFO : PUT result: OK Jan 29 16:28:43.117221 ignition[1331]: INFO : umount: umount passed Jan 29 16:28:43.117221 ignition[1331]: INFO : Ignition finished successfully Jan 29 16:28:43.119701 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 16:28:43.120255 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 16:28:43.124283 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 16:28:43.124867 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 16:28:43.128742 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 16:28:43.128826 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 16:28:43.130163 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 16:28:43.130234 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 16:28:43.133402 systemd[1]: Stopped target network.target - Network. Jan 29 16:28:43.136822 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 16:28:43.139345 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 16:28:43.142793 systemd[1]: Stopped target paths.target - Path Units. Jan 29 16:28:43.143802 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 16:28:43.147942 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:28:43.151152 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 16:28:43.151363 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 16:28:43.166673 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 16:28:43.166740 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:28:43.173170 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 16:28:43.176902 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:28:43.188733 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 16:28:43.188834 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 16:28:43.197296 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 16:28:43.198437 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 16:28:43.204882 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 16:28:43.205110 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 16:28:43.216165 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 16:28:43.222634 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 16:28:43.222788 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 16:28:43.228140 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 29 16:28:43.230122 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 16:28:43.231591 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 16:28:43.236707 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 29 16:28:43.240528 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 16:28:43.240597 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:28:43.250280 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 16:28:43.252012 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 16:28:43.252177 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:28:43.253946 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:28:43.254014 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:28:43.258017 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 16:28:43.259386 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 16:28:43.261840 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 16:28:43.261929 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:28:43.265766 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:28:43.276789 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 29 16:28:43.276871 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:28:43.306803 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jan 29 16:28:43.306944 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 16:28:43.313899 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 16:28:43.313967 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 16:28:43.324123 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 16:28:43.324270 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 16:28:43.338830 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 16:28:43.340007 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:28:43.345020 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 16:28:43.345105 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 16:28:43.349190 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 16:28:43.349268 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:28:43.353842 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 16:28:43.354981 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:28:43.359250 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 16:28:43.361061 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 16:28:43.364309 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:28:43.364395 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:28:43.376270 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 16:28:43.377837 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 16:28:43.377915 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:28:43.386382 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:28:43.386453 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:28:43.390036 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 29 16:28:43.390117 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:28:43.392646 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 16:28:43.392773 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 16:28:43.402251 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 16:28:43.413533 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 16:28:43.431125 systemd[1]: Switching root. Jan 29 16:28:43.488947 systemd-journald[179]: Journal stopped Jan 29 16:28:45.551797 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). 
Jan 29 16:28:45.551911 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 16:28:45.551936 kernel: SELinux: policy capability open_perms=1 Jan 29 16:28:45.551955 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 16:28:45.551976 kernel: SELinux: policy capability always_check_network=0 Jan 29 16:28:45.552002 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 16:28:45.552029 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 16:28:45.552049 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 16:28:45.555113 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 16:28:45.555156 kernel: audit: type=1403 audit(1738168123.787:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 16:28:45.555180 systemd[1]: Successfully loaded SELinux policy in 41.823ms. Jan 29 16:28:45.555212 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 32.727ms. Jan 29 16:28:45.555245 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:28:45.555267 systemd[1]: Detected virtualization amazon. Jan 29 16:28:45.555289 systemd[1]: Detected architecture x86-64. Jan 29 16:28:45.555311 systemd[1]: Detected first boot. Jan 29 16:28:45.555337 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:28:45.555363 zram_generator::config[1376]: No configuration found. Jan 29 16:28:45.555389 kernel: Guest personality initialized and is inactive Jan 29 16:28:45.555409 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jan 29 16:28:45.555429 kernel: Initialized host personality Jan 29 16:28:45.555541 kernel: NET: Registered PF_VSOCK protocol family Jan 29 16:28:45.555567 systemd[1]: Populated /etc with preset unit settings. Jan 29 16:28:45.555591 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 29 16:28:45.555619 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 16:28:45.555641 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 16:28:45.555664 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 16:28:45.555689 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 16:28:45.555712 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 16:28:45.555733 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 16:28:45.555754 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 16:28:45.555776 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 16:28:45.555797 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 16:28:45.555819 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 16:28:45.555841 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 16:28:45.555862 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:28:45.555887 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 29 16:28:45.555915 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 16:28:45.555937 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 16:28:45.555959 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 16:28:45.555981 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:28:45.556002 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 16:28:45.556024 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:28:45.556048 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 16:28:45.557116 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 16:28:45.557156 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 16:28:45.557183 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 16:28:45.557209 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:28:45.557234 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:28:45.557258 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:28:45.557282 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:28:45.557306 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 16:28:45.557338 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 16:28:45.557363 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 29 16:28:45.557387 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:28:45.557411 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:28:45.557434 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:28:45.557458 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 16:28:45.557485 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 16:28:45.557512 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 16:28:45.557536 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 16:28:45.557561 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:28:45.557589 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 16:28:45.557614 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 16:28:45.557638 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 16:28:45.557660 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 16:28:45.557688 systemd[1]: Reached target machines.target - Containers. Jan 29 16:28:45.557713 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 16:28:45.557736 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:28:45.557761 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jan 29 16:28:45.557789 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 16:28:45.557815 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:28:45.557838 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:28:45.557862 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:28:45.557886 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 16:28:45.557911 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:28:45.557937 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 16:28:45.557962 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 16:28:45.557990 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 16:28:45.558015 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 16:28:45.558041 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 16:28:45.561134 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:28:45.561179 kernel: loop: module loaded Jan 29 16:28:45.561201 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:28:45.561220 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:28:45.561238 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 16:28:45.561257 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 16:28:45.561281 kernel: fuse: init (API version 7.39) Jan 29 16:28:45.561300 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 29 16:28:45.561666 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:28:45.561697 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 16:28:45.561716 systemd[1]: Stopped verity-setup.service. Jan 29 16:28:45.561736 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:28:45.561761 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 16:28:45.561780 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 16:28:45.561799 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 16:28:45.561818 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 16:28:45.561838 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 16:28:45.561861 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 16:28:45.561881 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:28:45.561900 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 16:28:45.561920 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 16:28:45.561938 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:28:45.561958 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 29 16:28:45.561978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:28:45.561997 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:28:45.562019 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 16:28:45.562038 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 16:28:45.562056 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:28:45.562300 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:28:45.562320 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 16:28:45.562341 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:28:45.562361 kernel: ACPI: bus type drm_connector registered Jan 29 16:28:45.562677 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 16:28:45.562707 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 16:28:45.562726 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 16:28:45.562744 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:28:45.562764 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 29 16:28:45.562790 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 16:28:45.562862 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 16:28:45.562883 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:28:45.565920 systemd-journald[1459]: Collecting audit messages is disabled. Jan 29 16:28:45.565977 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 16:28:45.566001 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:28:45.566022 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 16:28:45.566043 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:28:45.566089 systemd-journald[1459]: Journal started Jan 29 16:28:45.566130 systemd-journald[1459]: Runtime Journal (/run/log/journal/ec280a9326185b621f9b9f2825dcc059) is 4.8M, max 38.5M, 33.7M free. Jan 29 16:28:44.996086 systemd[1]: Queued start job for default target multi-user.target. Jan 29 16:28:45.005313 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 29 16:28:45.005908 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 16:28:45.584388 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:28:45.584455 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 16:28:45.591796 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:28:45.597183 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 16:28:45.599761 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:28:45.601357 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 29 16:28:45.603334 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 16:28:45.605447 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 29 16:28:45.607416 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 16:28:45.609262 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 16:28:45.611225 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 16:28:45.698691 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 16:28:45.708342 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 16:28:45.718327 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 16:28:45.722530 kernel: loop0: detected capacity change from 0 to 205544 Jan 29 16:28:45.723488 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 16:28:45.727286 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 16:28:45.735330 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 29 16:28:45.758727 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:28:45.768990 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 16:28:45.773045 systemd-journald[1459]: Time spent on flushing to /var/log/journal/ec280a9326185b621f9b9f2825dcc059 is 99.185ms for 964 entries. Jan 29 16:28:45.773045 systemd-journald[1459]: System Journal (/var/log/journal/ec280a9326185b621f9b9f2825dcc059) is 8M, max 195.6M, 187.6M free. Jan 29 16:28:45.884832 systemd-journald[1459]: Received client request to flush runtime journal. Jan 29 16:28:45.794282 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:28:45.812421 udevadm[1518]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 16:28:45.834588 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 16:28:45.868742 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:28:45.880566 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 29 16:28:45.888159 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 16:28:45.918265 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 16:28:45.963975 systemd-tmpfiles[1522]: ACLs are not supported, ignoring. Jan 29 16:28:45.964005 systemd-tmpfiles[1522]: ACLs are not supported, ignoring. Jan 29 16:28:45.972092 kernel: loop1: detected capacity change from 0 to 62832 Jan 29 16:28:45.980989 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:28:46.008846 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jan 29 16:28:46.054176 kernel: loop2: detected capacity change from 0 to 147912 Jan 29 16:28:46.189597 kernel: loop3: detected capacity change from 0 to 138176 Jan 29 16:28:46.289684 kernel: loop4: detected capacity change from 0 to 205544 Jan 29 16:28:46.362101 kernel: loop5: detected capacity change from 0 to 62832 Jan 29 16:28:46.400100 kernel: loop6: detected capacity change from 0 to 147912 Jan 29 16:28:46.445103 kernel: loop7: detected capacity change from 0 to 138176 Jan 29 16:28:46.494129 (sd-merge)[1534]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 29 16:28:46.494928 (sd-merge)[1534]: Merged extensions into '/usr'. Jan 29 16:28:46.507679 systemd[1]: Reload requested from client PID 1489 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 16:28:46.507699 systemd[1]: Reloading... Jan 29 16:28:46.678097 zram_generator::config[1560]: No configuration found. Jan 29 16:28:47.048020 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:28:47.225637 ldconfig[1482]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 16:28:47.244313 systemd[1]: Reloading finished in 735 ms. Jan 29 16:28:47.268470 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 16:28:47.270889 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 16:28:47.303423 systemd[1]: Starting ensure-sysext.service... Jan 29 16:28:47.308387 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:28:47.328910 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 16:28:47.348295 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:28:47.354802 systemd[1]: Reload requested from client PID 1611 ('systemctl') (unit ensure-sysext.service)... Jan 29 16:28:47.355594 systemd[1]: Reloading... Jan 29 16:28:47.372664 systemd-tmpfiles[1612]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 16:28:47.373364 systemd-tmpfiles[1612]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 16:28:47.375733 systemd-tmpfiles[1612]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 16:28:47.376297 systemd-tmpfiles[1612]: ACLs are not supported, ignoring. Jan 29 16:28:47.376397 systemd-tmpfiles[1612]: ACLs are not supported, ignoring. Jan 29 16:28:47.384910 systemd-tmpfiles[1612]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:28:47.386167 systemd-tmpfiles[1612]: Skipping /boot Jan 29 16:28:47.427023 systemd-tmpfiles[1612]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:28:47.427219 systemd-tmpfiles[1612]: Skipping /boot Jan 29 16:28:47.460360 systemd-udevd[1614]: Using default interface naming scheme 'v255'. Jan 29 16:28:47.565313 zram_generator::config[1645]: No configuration found. Jan 29 16:28:47.755520 (udev-worker)[1650]: Network interface NamePolicy= disabled on kernel command line. 
Jan 29 16:28:47.873483 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1651) Jan 29 16:28:47.992151 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 29 16:28:47.999253 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 16:28:48.010359 kernel: ACPI: button: Power Button [PWRF] Jan 29 16:28:48.017096 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jan 29 16:28:48.027167 kernel: ACPI: button: Sleep Button [SLPF] Jan 29 16:28:48.064485 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:28:48.077127 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Jan 29 16:28:48.242098 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 16:28:48.302594 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 16:28:48.303422 systemd[1]: Reloading finished in 945 ms. Jan 29 16:28:48.317053 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:28:48.341482 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:28:48.384377 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 16:28:48.393373 systemd[1]: Finished ensure-sysext.service. Jan 29 16:28:48.464407 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 16:28:48.466637 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:28:48.472288 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:28:48.482806 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 16:28:48.485198 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:28:48.489350 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 16:28:48.500363 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:28:48.509158 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:28:48.534490 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:28:48.541296 lvm[1804]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:28:48.548475 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:28:48.550087 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:28:48.559272 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 16:28:48.561596 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:28:48.572343 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jan 29 16:28:48.586309 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:28:48.592287 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:28:48.593547 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 16:28:48.602323 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 16:28:48.633432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:28:48.637172 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:28:48.643207 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:28:48.644217 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:28:48.646296 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:28:48.647151 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:28:48.649696 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:28:48.651150 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:28:48.653775 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:28:48.654764 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:28:48.664308 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 16:28:48.683736 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 16:28:48.686824 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:28:48.695849 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 16:28:48.697860 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:28:48.697952 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:28:48.708355 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 16:28:48.724086 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 16:28:48.739023 lvm[1836]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:28:48.766758 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 16:28:48.777651 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 16:28:48.781275 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 16:28:48.783883 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:28:48.788735 augenrules[1848]: No rules Jan 29 16:28:48.790818 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:28:48.791288 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:28:48.802660 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 16:28:48.812022 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jan 29 16:28:48.833696 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 16:28:49.046220 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:28:49.067055 systemd-networkd[1817]: lo: Link UP Jan 29 16:28:49.067081 systemd-networkd[1817]: lo: Gained carrier Jan 29 16:28:49.069161 systemd-networkd[1817]: Enumeration completed Jan 29 16:28:49.069620 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:28:49.069635 systemd-networkd[1817]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:28:49.071190 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:28:49.076009 systemd-networkd[1817]: eth0: Link UP Jan 29 16:28:49.076361 systemd-networkd[1817]: eth0: Gained carrier Jan 29 16:28:49.076491 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:28:49.078858 systemd-resolved[1818]: Positive Trust Anchors: Jan 29 16:28:49.078884 systemd-resolved[1818]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:28:49.078945 systemd-resolved[1818]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:28:49.080366 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 29 16:28:49.087875 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 16:28:49.093656 systemd-networkd[1817]: eth0: DHCPv4 address 172.31.31.0/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 16:28:49.102166 systemd-resolved[1818]: Defaulting to hostname 'linux'. Jan 29 16:28:49.104868 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:28:49.106834 systemd[1]: Reached target network.target - Network. Jan 29 16:28:49.111207 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:28:49.113419 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:28:49.115052 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 16:28:49.116978 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 16:28:49.119325 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 16:28:49.121011 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 16:28:49.126406 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 16:28:49.130114 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 16:28:49.130390 systemd[1]: Reached target paths.target - Path Units. 
Jan 29 16:28:49.133875 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:28:49.137184 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 16:28:49.141036 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 16:28:49.146736 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 16:28:49.152437 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 16:28:49.155172 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 29 16:28:49.170106 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 16:28:49.177475 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 29 16:28:49.183552 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 16:28:49.191191 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 16:28:49.194663 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:28:49.196690 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:28:49.198569 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:28:49.198616 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:28:49.204273 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:28:49.223453 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 16:28:49.237788 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:28:49.242338 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:28:49.263321 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:28:49.268290 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 16:28:49.288447 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:28:49.309798 jq[1876]: false Jan 29 16:28:49.324332 systemd[1]: Started ntpd.service - Network Time Service. Jan 29 16:28:49.340353 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 29 16:28:49.358424 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:28:49.366353 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 16:28:49.384406 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:28:49.393045 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 16:28:49.395339 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 16:28:49.406410 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:28:49.415314 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 16:28:49.429377 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:28:49.429668 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 29 16:28:49.433530 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:28:49.433803 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 16:28:49.450134 jq[1891]: true Jan 29 16:28:49.472565 extend-filesystems[1877]: Found loop4 Jan 29 16:28:49.476561 extend-filesystems[1877]: Found loop5 Jan 29 16:28:49.476561 extend-filesystems[1877]: Found loop6 Jan 29 16:28:49.476561 extend-filesystems[1877]: Found loop7 Jan 29 16:28:49.476561 extend-filesystems[1877]: Found nvme0n1 Jan 29 16:28:49.476561 extend-filesystems[1877]: Found nvme0n1p1 Jan 29 16:28:49.476561 extend-filesystems[1877]: Found nvme0n1p2 Jan 29 16:28:49.476561 extend-filesystems[1877]: Found nvme0n1p3 Jan 29 16:28:49.476561 extend-filesystems[1877]: Found usr Jan 29 16:28:49.476561 extend-filesystems[1877]: Found nvme0n1p4 Jan 29 16:28:49.476561 extend-filesystems[1877]: Found nvme0n1p6 Jan 29 16:28:49.476561 extend-filesystems[1877]: Found nvme0n1p7 Jan 29 16:28:49.476561 extend-filesystems[1877]: Found nvme0n1p9 Jan 29 16:28:49.476561 extend-filesystems[1877]: Checking size of /dev/nvme0n1p9 Jan 29 16:28:49.491659 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 16:28:49.481918 dbus-daemon[1875]: [system] SELinux support is enabled Jan 29 16:28:49.505462 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:28:49.509243 dbus-daemon[1875]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1817 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 29 16:28:49.511175 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 16:28:49.544829 update_engine[1890]: I20250129 16:28:49.543525 1890 main.cc:92] Flatcar Update Engine starting Jan 29 16:28:49.555761 jq[1902]: true Jan 29 16:28:49.575788 update_engine[1890]: I20250129 16:28:49.575429 1890 update_check_scheduler.cc:74] Next update check in 3m18s Jan 29 16:28:49.601405 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:28:49.601627 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:28:49.601658 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 16:28:49.601955 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:28:49.601974 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 16:28:49.609946 dbus-daemon[1875]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 16:28:49.610053 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:28:49.618711 extend-filesystems[1877]: Resized partition /dev/nvme0n1p9 Jan 29 16:28:49.618399 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 16:28:49.632723 extend-filesystems[1923]: resize2fs 1.47.1 (20-May-2024) Jan 29 16:28:49.631639 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 29 16:28:49.636185 systemd[1]: Finished setup-oem.service - Setup OEM. 
Jan 29 16:28:49.653111 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 29 16:28:49.686164 (ntainerd)[1903]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:28:49.710852 ntpd[1880]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 14:24:41 UTC 2025 (1): Starting Jan 29 16:28:49.713704 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 14:24:41 UTC 2025 (1): Starting Jan 29 16:28:49.713704 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 16:28:49.713704 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: ---------------------------------------------------- Jan 29 16:28:49.713704 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: ntp-4 is maintained by Network Time Foundation, Jan 29 16:28:49.713704 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 16:28:49.713704 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: corporation. Support and training for ntp-4 are Jan 29 16:28:49.713704 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: available at https://www.nwtime.org/support Jan 29 16:28:49.713704 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: ---------------------------------------------------- Jan 29 16:28:49.710897 ntpd[1880]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 16:28:49.710917 ntpd[1880]: ---------------------------------------------------- Jan 29 16:28:49.710927 ntpd[1880]: ntp-4 is maintained by Network Time Foundation, Jan 29 16:28:49.710939 ntpd[1880]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 16:28:49.711038 ntpd[1880]: corporation. Support and training for ntp-4 are Jan 29 16:28:49.711048 ntpd[1880]: available at https://www.nwtime.org/support Jan 29 16:28:49.711058 ntpd[1880]: ---------------------------------------------------- Jan 29 16:28:49.724852 ntpd[1880]: proto: precision = 0.065 usec (-24) Jan 29 16:28:49.725010 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: proto: precision = 0.065 usec (-24) Jan 29 16:28:49.725248 ntpd[1880]: basedate set to 2025-01-17 Jan 29 16:28:49.726900 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: basedate set to 2025-01-17 Jan 29 16:28:49.726900 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: gps base set to 2025-01-19 (week 2350) Jan 29 16:28:49.725268 ntpd[1880]: gps base set to 2025-01-19 (week 2350) Jan 29 16:28:49.744649 ntpd[1880]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 16:28:49.776609 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 16:28:49.776609 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 16:28:49.776609 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 16:28:49.776609 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: Listen normally on 3 eth0 172.31.31.0:123 Jan 29 16:28:49.776609 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: Listen normally on 4 lo [::1]:123 Jan 29 16:28:49.776609 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: bind(21) AF_INET6 fe80::441:f9ff:fec4:fa5b%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 16:28:49.776609 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: unable to create socket on eth0 (5) for fe80::441:f9ff:fec4:fa5b%2#123 Jan 29 16:28:49.776609 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: failed to init interface for address fe80::441:f9ff:fec4:fa5b%2 Jan 29 16:28:49.776609 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: Listening on routing socket on fd #21 for interface updates Jan 29 16:28:49.744715 ntpd[1880]: Listen and drop 
on 1 v4wildcard 0.0.0.0:123 Jan 29 16:28:49.780747 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 16:28:49.780747 ntpd[1880]: 29 Jan 16:28:49 ntpd[1880]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 16:28:49.745812 ntpd[1880]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 16:28:49.745862 ntpd[1880]: Listen normally on 3 eth0 172.31.31.0:123 Jan 29 16:28:49.745905 ntpd[1880]: Listen normally on 4 lo [::1]:123 Jan 29 16:28:49.747714 ntpd[1880]: bind(21) AF_INET6 fe80::441:f9ff:fec4:fa5b%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 16:28:49.747751 ntpd[1880]: unable to create socket on eth0 (5) for fe80::441:f9ff:fec4:fa5b%2#123 Jan 29 16:28:49.747769 ntpd[1880]: failed to init interface for address fe80::441:f9ff:fec4:fa5b%2 Jan 29 16:28:49.747879 ntpd[1880]: Listening on routing socket on fd #21 for interface updates Jan 29 16:28:49.780151 ntpd[1880]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 16:28:49.780191 ntpd[1880]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 16:28:49.798108 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 29 16:28:49.831087 extend-filesystems[1923]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 29 16:28:49.831087 extend-filesystems[1923]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 16:28:49.831087 extend-filesystems[1923]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 29 16:28:49.837353 extend-filesystems[1877]: Resized filesystem in /dev/nvme0n1p9 Jan 29 16:28:49.841150 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:28:49.841832 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:28:49.844093 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1659) Jan 29 16:28:49.858226 bash[1946]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:28:49.904299 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:28:49.923348 systemd[1]: Starting sshkeys.service... 
Jan 29 16:28:49.967763 coreos-metadata[1874]: Jan 29 16:28:49.960 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 16:28:49.970711 coreos-metadata[1874]: Jan 29 16:28:49.968 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 29 16:28:49.971823 coreos-metadata[1874]: Jan 29 16:28:49.971 INFO Fetch successful Jan 29 16:28:49.971823 coreos-metadata[1874]: Jan 29 16:28:49.971 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 29 16:28:49.983116 coreos-metadata[1874]: Jan 29 16:28:49.976 INFO Fetch successful Jan 29 16:28:49.983116 coreos-metadata[1874]: Jan 29 16:28:49.976 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 29 16:28:49.983116 coreos-metadata[1874]: Jan 29 16:28:49.977 INFO Fetch successful Jan 29 16:28:49.983116 coreos-metadata[1874]: Jan 29 16:28:49.977 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 29 16:28:49.988898 coreos-metadata[1874]: Jan 29 16:28:49.988 INFO Fetch successful Jan 29 16:28:49.988898 coreos-metadata[1874]: Jan 29 16:28:49.988 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 29 16:28:50.008875 coreos-metadata[1874]: Jan 29 16:28:50.005 INFO Fetch failed with 404: resource not found Jan 29 16:28:50.008875 coreos-metadata[1874]: Jan 29 16:28:50.006 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 29 16:28:50.008875 coreos-metadata[1874]: Jan 29 16:28:50.008 INFO Fetch successful Jan 29 16:28:50.008875 coreos-metadata[1874]: Jan 29 16:28:50.008 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 29 16:28:50.012424 coreos-metadata[1874]: Jan 29 16:28:50.009 INFO Fetch successful Jan 29 16:28:50.012424 coreos-metadata[1874]: Jan 29 16:28:50.010 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 29 16:28:50.016149 coreos-metadata[1874]: Jan 29 16:28:50.012 INFO Fetch successful Jan 29 16:28:50.016149 coreos-metadata[1874]: Jan 29 16:28:50.012 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 29 16:28:50.019477 coreos-metadata[1874]: Jan 29 16:28:50.016 INFO Fetch successful Jan 29 16:28:50.019477 coreos-metadata[1874]: Jan 29 16:28:50.016 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 29 16:28:50.019301 systemd-logind[1888]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 16:28:50.019326 systemd-logind[1888]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 29 16:28:50.019352 systemd-logind[1888]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 16:28:50.034216 coreos-metadata[1874]: Jan 29 16:28:50.026 INFO Fetch successful Jan 29 16:28:50.038942 systemd-logind[1888]: New seat seat0. Jan 29 16:28:50.047856 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 16:28:50.080891 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 16:28:50.091746 sshd_keygen[1911]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:28:50.094650 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 16:28:50.131701 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Jan 29 16:28:50.149721 dbus-daemon[1875]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 16:28:50.153181 systemd-networkd[1817]: eth0: Gained IPv6LL Jan 29 16:28:50.155584 dbus-daemon[1875]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1925 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 16:28:50.175533 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:28:50.181483 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:28:50.193576 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 29 16:28:50.205478 locksmithd[1921]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:28:50.231851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:28:50.276594 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:28:50.290610 systemd[1]: Starting polkit.service - Authorization Manager... Jan 29 16:28:50.298584 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 16:28:50.331860 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:28:50.361119 amazon-ssm-agent[2008]: Initializing new seelog logger Jan 29 16:28:50.361119 amazon-ssm-agent[2008]: New Seelog Logger Creation Complete Jan 29 16:28:50.361119 amazon-ssm-agent[2008]: 2025/01/29 16:28:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:28:50.361119 amazon-ssm-agent[2008]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:28:50.361119 amazon-ssm-agent[2008]: 2025/01/29 16:28:50 processing appconfig overrides Jan 29 16:28:50.361119 amazon-ssm-agent[2008]: 2025/01/29 16:28:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:28:50.361119 amazon-ssm-agent[2008]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:28:50.361119 amazon-ssm-agent[2008]: 2025/01/29 16:28:50 processing appconfig overrides Jan 29 16:28:50.361119 amazon-ssm-agent[2008]: 2025/01/29 16:28:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:28:50.361119 amazon-ssm-agent[2008]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:28:50.361119 amazon-ssm-agent[2008]: 2025/01/29 16:28:50 processing appconfig overrides Jan 29 16:28:50.361119 amazon-ssm-agent[2008]: 2025-01-29 16:28:50 INFO Proxy environment variables: Jan 29 16:28:50.369007 polkitd[2024]: Started polkitd version 121 Jan 29 16:28:50.431324 amazon-ssm-agent[2008]: 2025/01/29 16:28:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:28:50.431324 amazon-ssm-agent[2008]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:28:50.431324 amazon-ssm-agent[2008]: 2025/01/29 16:28:50 processing appconfig overrides Jan 29 16:28:50.482401 amazon-ssm-agent[2008]: 2025-01-29 16:28:50 INFO https_proxy: Jan 29 16:28:50.463595 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:28:50.484746 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jan 29 16:28:50.486746 polkitd[2024]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 16:28:50.486829 polkitd[2024]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 16:28:50.498149 systemd[1]: Started sshd@0-172.31.31.0:22-139.178.68.195:42858.service - OpenSSH per-connection server daemon (139.178.68.195:42858). Jan 29 16:28:50.571407 polkitd[2024]: Finished loading, compiling and executing 2 rules Jan 29 16:28:50.575002 dbus-daemon[1875]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 16:28:50.581928 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 16:28:50.587519 polkitd[2024]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 16:28:50.599089 amazon-ssm-agent[2008]: 2025-01-29 16:28:50 INFO http_proxy: Jan 29 16:28:50.685280 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:28:50.687938 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:28:50.708465 amazon-ssm-agent[2008]: 2025-01-29 16:28:50 INFO no_proxy: Jan 29 16:28:50.739285 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:28:50.771894 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:28:50.799638 systemd-hostnamed[1925]: Hostname set to (transient) Jan 29 16:28:50.799778 systemd-resolved[1818]: System hostname changed to 'ip-172-31-31-0'. Jan 29 16:28:50.813844 coreos-metadata[1979]: Jan 29 16:28:50.813 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 16:28:50.819744 amazon-ssm-agent[2008]: 2025-01-29 16:28:50 INFO Checking if agent identity type OnPrem can be assumed Jan 29 16:28:50.819854 coreos-metadata[1979]: Jan 29 16:28:50.819 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 29 16:28:50.821377 coreos-metadata[1979]: Jan 29 16:28:50.821 INFO Fetch successful Jan 29 16:28:50.821377 coreos-metadata[1979]: Jan 29 16:28:50.821 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 16:28:50.822489 coreos-metadata[1979]: Jan 29 16:28:50.822 INFO Fetch successful Jan 29 16:28:50.827585 unknown[1979]: wrote ssh authorized keys file for user: core Jan 29 16:28:50.830352 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:28:50.842639 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 16:28:50.853298 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 16:28:50.856729 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:28:50.903931 update-ssh-keys[2104]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:28:50.905642 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 16:28:50.921085 amazon-ssm-agent[2008]: 2025-01-29 16:28:50 INFO Checking if agent identity type EC2 can be assumed Jan 29 16:28:50.927016 systemd[1]: Finished sshkeys.service. 
Jan 29 16:28:51.019369 amazon-ssm-agent[2008]: 2025-01-29 16:28:50 INFO Agent will take identity from EC2 Jan 29 16:28:51.102269 containerd[1903]: time="2025-01-29T16:28:51.102180001Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:28:51.109006 sshd[2068]: Accepted publickey for core from 139.178.68.195 port 42858 ssh2: RSA SHA256:vbUkVCMrs4gTXRs1sqC0OSq8ZkfijT0hqZd53m264iQ Jan 29 16:28:51.112044 sshd-session[2068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:51.120097 amazon-ssm-agent[2008]: 2025-01-29 16:28:50 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 16:28:51.127658 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 16:28:51.138661 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:28:51.175312 systemd-logind[1888]: New session 1 of user core. Jan 29 16:28:51.190540 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:28:51.203587 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:28:51.222313 amazon-ssm-agent[2008]: 2025-01-29 16:28:50 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 16:28:51.227172 (systemd)[2113]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:28:51.232027 containerd[1903]: time="2025-01-29T16:28:51.231589189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:28:51.234140 systemd-logind[1888]: New session c1 of user core. Jan 29 16:28:51.240236 containerd[1903]: time="2025-01-29T16:28:51.239032171Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:28:51.240236 containerd[1903]: time="2025-01-29T16:28:51.239102248Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:28:51.240236 containerd[1903]: time="2025-01-29T16:28:51.239128828Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 16:28:51.240236 containerd[1903]: time="2025-01-29T16:28:51.239314161Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:28:51.240236 containerd[1903]: time="2025-01-29T16:28:51.239337513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:28:51.240236 containerd[1903]: time="2025-01-29T16:28:51.239410532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:28:51.240236 containerd[1903]: time="2025-01-29T16:28:51.239428449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:28:51.240236 containerd[1903]: time="2025-01-29T16:28:51.239685566Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:28:51.240236 containerd[1903]: time="2025-01-29T16:28:51.239706890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:28:51.240236 containerd[1903]: time="2025-01-29T16:28:51.239726445Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:28:51.240236 containerd[1903]: time="2025-01-29T16:28:51.239741029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:28:51.240673 containerd[1903]: time="2025-01-29T16:28:51.239829578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:28:51.240673 containerd[1903]: time="2025-01-29T16:28:51.240084157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:28:51.240673 containerd[1903]: time="2025-01-29T16:28:51.240277089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:28:51.240673 containerd[1903]: time="2025-01-29T16:28:51.240296721Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:28:51.240673 containerd[1903]: time="2025-01-29T16:28:51.240391888Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:28:51.240673 containerd[1903]: time="2025-01-29T16:28:51.240460970Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:28:51.255684 containerd[1903]: time="2025-01-29T16:28:51.255638774Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:28:51.258262 containerd[1903]: time="2025-01-29T16:28:51.258223096Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:28:51.259800 containerd[1903]: time="2025-01-29T16:28:51.258412428Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:28:51.261251 containerd[1903]: time="2025-01-29T16:28:51.261154377Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 16:28:51.261396 containerd[1903]: time="2025-01-29T16:28:51.261380156Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:28:51.262091 containerd[1903]: time="2025-01-29T16:28:51.261694897Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:28:51.274844 containerd[1903]: time="2025-01-29T16:28:51.274723805Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:28:51.275558 containerd[1903]: time="2025-01-29T16:28:51.274954404Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 29 16:28:51.275558 containerd[1903]: time="2025-01-29T16:28:51.274990443Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:28:51.275558 containerd[1903]: time="2025-01-29T16:28:51.275038874Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:28:51.275558 containerd[1903]: time="2025-01-29T16:28:51.275084236Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:28:51.275558 containerd[1903]: time="2025-01-29T16:28:51.275107174Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:28:51.275558 containerd[1903]: time="2025-01-29T16:28:51.275133527Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:28:51.275558 containerd[1903]: time="2025-01-29T16:28:51.275158831Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:28:51.275558 containerd[1903]: time="2025-01-29T16:28:51.275184392Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:28:51.275558 containerd[1903]: time="2025-01-29T16:28:51.275207368Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:28:51.275558 containerd[1903]: time="2025-01-29T16:28:51.275229636Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:28:51.275558 containerd[1903]: time="2025-01-29T16:28:51.275253101Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:28:51.275558 containerd[1903]: time="2025-01-29T16:28:51.275366022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:28:51.275558 containerd[1903]: time="2025-01-29T16:28:51.275397732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:28:51.275558 containerd[1903]: time="2025-01-29T16:28:51.275421370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:28:51.276189 containerd[1903]: time="2025-01-29T16:28:51.275445396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:28:51.276189 containerd[1903]: time="2025-01-29T16:28:51.275468684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:28:51.276189 containerd[1903]: time="2025-01-29T16:28:51.275493147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:28:51.276189 containerd[1903]: time="2025-01-29T16:28:51.275516368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:28:51.276189 containerd[1903]: time="2025-01-29T16:28:51.275538174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:28:51.276189 containerd[1903]: time="2025-01-29T16:28:51.275561670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 29 16:28:51.276189 containerd[1903]: time="2025-01-29T16:28:51.275587458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:28:51.276189 containerd[1903]: time="2025-01-29T16:28:51.275608698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:28:51.276189 containerd[1903]: time="2025-01-29T16:28:51.275633006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:28:51.276189 containerd[1903]: time="2025-01-29T16:28:51.275668796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:28:51.276189 containerd[1903]: time="2025-01-29T16:28:51.275698329Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:28:51.276189 containerd[1903]: time="2025-01-29T16:28:51.275785879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:28:51.276189 containerd[1903]: time="2025-01-29T16:28:51.275815380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:28:51.276189 containerd[1903]: time="2025-01-29T16:28:51.275839290Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:28:51.276854 containerd[1903]: time="2025-01-29T16:28:51.275915882Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 16:28:51.276854 containerd[1903]: time="2025-01-29T16:28:51.275949568Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:28:51.276854 containerd[1903]: time="2025-01-29T16:28:51.275972416Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:28:51.276854 containerd[1903]: time="2025-01-29T16:28:51.275997822Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:28:51.276854 containerd[1903]: time="2025-01-29T16:28:51.276014309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:28:51.276854 containerd[1903]: time="2025-01-29T16:28:51.276043321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:28:51.277659 containerd[1903]: time="2025-01-29T16:28:51.277224365Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:28:51.277659 containerd[1903]: time="2025-01-29T16:28:51.277396200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 16:28:51.277840 containerd[1903]: time="2025-01-29T16:28:51.277783653Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:28:51.278138 containerd[1903]: time="2025-01-29T16:28:51.277845439Z" level=info msg="Connect containerd service" Jan 29 16:28:51.278138 containerd[1903]: time="2025-01-29T16:28:51.277895498Z" level=info msg="using legacy CRI server" Jan 29 16:28:51.278138 containerd[1903]: time="2025-01-29T16:28:51.277908001Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:28:51.280411 containerd[1903]: time="2025-01-29T16:28:51.280376049Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:28:51.284017 containerd[1903]: time="2025-01-29T16:28:51.281506527Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:28:51.284017 
containerd[1903]: time="2025-01-29T16:28:51.282148641Z" level=info msg="Start subscribing containerd event" Jan 29 16:28:51.284017 containerd[1903]: time="2025-01-29T16:28:51.282217589Z" level=info msg="Start recovering state" Jan 29 16:28:51.284017 containerd[1903]: time="2025-01-29T16:28:51.282311540Z" level=info msg="Start event monitor" Jan 29 16:28:51.284017 containerd[1903]: time="2025-01-29T16:28:51.282332286Z" level=info msg="Start snapshots syncer" Jan 29 16:28:51.284017 containerd[1903]: time="2025-01-29T16:28:51.282347485Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:28:51.284017 containerd[1903]: time="2025-01-29T16:28:51.282358151Z" level=info msg="Start streaming server" Jan 29 16:28:51.284017 containerd[1903]: time="2025-01-29T16:28:51.283244481Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:28:51.284017 containerd[1903]: time="2025-01-29T16:28:51.283306386Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:28:51.283480 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:28:51.289619 containerd[1903]: time="2025-01-29T16:28:51.289141203Z" level=info msg="containerd successfully booted in 0.189684s" Jan 29 16:28:51.322434 amazon-ssm-agent[2008]: 2025-01-29 16:28:50 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 16:28:51.424090 amazon-ssm-agent[2008]: 2025-01-29 16:28:50 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 29 16:28:51.522119 amazon-ssm-agent[2008]: 2025-01-29 16:28:50 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 29 16:28:51.593584 systemd[2113]: Queued start job for default target default.target. Jan 29 16:28:51.612225 systemd[2113]: Created slice app.slice - User Application Slice. Jan 29 16:28:51.612353 systemd[2113]: Reached target paths.target - Paths. Jan 29 16:28:51.612999 systemd[2113]: Reached target timers.target - Timers. Jan 29 16:28:51.623361 systemd[2113]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:28:51.629163 amazon-ssm-agent[2008]: 2025-01-29 16:28:50 INFO [amazon-ssm-agent] Starting Core Agent Jan 29 16:28:51.664455 systemd[2113]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:28:51.664622 systemd[2113]: Reached target sockets.target - Sockets. Jan 29 16:28:51.664705 systemd[2113]: Reached target basic.target - Basic System. Jan 29 16:28:51.664760 systemd[2113]: Reached target default.target - Main User Target. Jan 29 16:28:51.664802 systemd[2113]: Startup finished in 410ms. Jan 29 16:28:51.664831 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:28:51.683346 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:28:51.726998 amazon-ssm-agent[2008]: 2025-01-29 16:28:50 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 29 16:28:51.839138 amazon-ssm-agent[2008]: 2025-01-29 16:28:50 INFO [Registrar] Starting registrar module Jan 29 16:28:51.860364 systemd[1]: Started sshd@1-172.31.31.0:22-139.178.68.195:42864.service - OpenSSH per-connection server daemon (139.178.68.195:42864). Jan 29 16:28:51.934940 amazon-ssm-agent[2008]: 2025-01-29 16:28:50 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 29 16:28:52.015189 amazon-ssm-agent[2008]: 2025-01-29 16:28:51 INFO [EC2Identity] EC2 registration was successful. 
Jan 29 16:28:52.015189 amazon-ssm-agent[2008]: 2025-01-29 16:28:51 INFO [CredentialRefresher] credentialRefresher has started Jan 29 16:28:52.015189 amazon-ssm-agent[2008]: 2025-01-29 16:28:51 INFO [CredentialRefresher] Starting credentials refresher loop Jan 29 16:28:52.015549 amazon-ssm-agent[2008]: 2025-01-29 16:28:52 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 29 16:28:52.036116 amazon-ssm-agent[2008]: 2025-01-29 16:28:52 INFO [CredentialRefresher] Next credential rotation will be in 32.41665659233333 minutes Jan 29 16:28:52.084168 sshd[2126]: Accepted publickey for core from 139.178.68.195 port 42864 ssh2: RSA SHA256:vbUkVCMrs4gTXRs1sqC0OSq8ZkfijT0hqZd53m264iQ Jan 29 16:28:52.085903 sshd-session[2126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:52.098761 systemd-logind[1888]: New session 2 of user core. Jan 29 16:28:52.106830 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 16:28:52.253532 sshd[2128]: Connection closed by 139.178.68.195 port 42864 Jan 29 16:28:52.254759 sshd-session[2126]: pam_unix(sshd:session): session closed for user core Jan 29 16:28:52.261435 systemd[1]: sshd@1-172.31.31.0:22-139.178.68.195:42864.service: Deactivated successfully. Jan 29 16:28:52.264326 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 16:28:52.268992 systemd-logind[1888]: Session 2 logged out. Waiting for processes to exit. Jan 29 16:28:52.271129 systemd-logind[1888]: Removed session 2. Jan 29 16:28:52.297513 systemd[1]: Started sshd@2-172.31.31.0:22-139.178.68.195:42872.service - OpenSSH per-connection server daemon (139.178.68.195:42872). Jan 29 16:28:52.520710 sshd[2134]: Accepted publickey for core from 139.178.68.195 port 42872 ssh2: RSA SHA256:vbUkVCMrs4gTXRs1sqC0OSq8ZkfijT0hqZd53m264iQ Jan 29 16:28:52.522947 sshd-session[2134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:52.541712 systemd-logind[1888]: New session 3 of user core. Jan 29 16:28:52.553107 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 16:28:52.702056 sshd[2136]: Connection closed by 139.178.68.195 port 42872 Jan 29 16:28:52.702840 sshd-session[2134]: pam_unix(sshd:session): session closed for user core Jan 29 16:28:52.708421 systemd[1]: sshd@2-172.31.31.0:22-139.178.68.195:42872.service: Deactivated successfully. Jan 29 16:28:52.711955 ntpd[1880]: Listen normally on 6 eth0 [fe80::441:f9ff:fec4:fa5b%2]:123 Jan 29 16:28:52.712607 ntpd[1880]: 29 Jan 16:28:52 ntpd[1880]: Listen normally on 6 eth0 [fe80::441:f9ff:fec4:fa5b%2]:123 Jan 29 16:28:52.712866 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 16:28:52.716393 systemd-logind[1888]: Session 3 logged out. Waiting for processes to exit. Jan 29 16:28:52.719259 systemd-logind[1888]: Removed session 3. Jan 29 16:28:53.034134 amazon-ssm-agent[2008]: 2025-01-29 16:28:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 29 16:28:53.135430 amazon-ssm-agent[2008]: 2025-01-29 16:28:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2142) started Jan 29 16:28:53.236118 amazon-ssm-agent[2008]: 2025-01-29 16:28:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 29 16:28:53.436460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:28:53.438873 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:28:53.443405 systemd[1]: Startup finished in 670ms (kernel) + 8.088s (initrd) + 9.697s (userspace) = 18.457s. Jan 29 16:28:53.443593 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:28:54.719743 kubelet[2157]: E0129 16:28:54.719640 2157 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:28:54.722990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:28:54.723690 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:28:54.724181 systemd[1]: kubelet.service: Consumed 991ms CPU time, 236.2M memory peak. Jan 29 16:28:57.161822 systemd-resolved[1818]: Clock change detected. Flushing caches. Jan 29 16:29:03.205112 systemd[1]: Started sshd@3-172.31.31.0:22-139.178.68.195:40120.service - OpenSSH per-connection server daemon (139.178.68.195:40120). Jan 29 16:29:03.410872 sshd[2169]: Accepted publickey for core from 139.178.68.195 port 40120 ssh2: RSA SHA256:vbUkVCMrs4gTXRs1sqC0OSq8ZkfijT0hqZd53m264iQ Jan 29 16:29:03.413138 sshd-session[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:29:03.434890 systemd-logind[1888]: New session 4 of user core. Jan 29 16:29:03.446947 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 16:29:03.577621 sshd[2171]: Connection closed by 139.178.68.195 port 40120 Jan 29 16:29:03.578346 sshd-session[2169]: pam_unix(sshd:session): session closed for user core Jan 29 16:29:03.590989 systemd[1]: sshd@3-172.31.31.0:22-139.178.68.195:40120.service: Deactivated successfully. Jan 29 16:29:03.594920 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:29:03.598609 systemd-logind[1888]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:29:03.634924 systemd[1]: Started sshd@4-172.31.31.0:22-139.178.68.195:40128.service - OpenSSH per-connection server daemon (139.178.68.195:40128). Jan 29 16:29:03.637304 systemd-logind[1888]: Removed session 4. Jan 29 16:29:03.838288 sshd[2176]: Accepted publickey for core from 139.178.68.195 port 40128 ssh2: RSA SHA256:vbUkVCMrs4gTXRs1sqC0OSq8ZkfijT0hqZd53m264iQ Jan 29 16:29:03.840939 sshd-session[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:29:03.870286 systemd-logind[1888]: New session 5 of user core. Jan 29 16:29:03.883512 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 16:29:04.010038 sshd[2179]: Connection closed by 139.178.68.195 port 40128 Jan 29 16:29:04.010858 sshd-session[2176]: pam_unix(sshd:session): session closed for user core Jan 29 16:29:04.016742 systemd[1]: sshd@4-172.31.31.0:22-139.178.68.195:40128.service: Deactivated successfully. Jan 29 16:29:04.020568 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:29:04.023580 systemd-logind[1888]: Session 5 logged out. Waiting for processes to exit. Jan 29 16:29:04.026066 systemd-logind[1888]: Removed session 5. Jan 29 16:29:04.059568 systemd[1]: Started sshd@5-172.31.31.0:22-139.178.68.195:40130.service - OpenSSH per-connection server daemon (139.178.68.195:40130). 
Jan 29 16:29:04.281099 sshd[2185]: Accepted publickey for core from 139.178.68.195 port 40130 ssh2: RSA SHA256:vbUkVCMrs4gTXRs1sqC0OSq8ZkfijT0hqZd53m264iQ Jan 29 16:29:04.282602 sshd-session[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:29:04.291205 systemd-logind[1888]: New session 6 of user core. Jan 29 16:29:04.299430 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 16:29:04.425420 sshd[2187]: Connection closed by 139.178.68.195 port 40130 Jan 29 16:29:04.426119 sshd-session[2185]: pam_unix(sshd:session): session closed for user core Jan 29 16:29:04.433231 systemd[1]: sshd@5-172.31.31.0:22-139.178.68.195:40130.service: Deactivated successfully. Jan 29 16:29:04.438101 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:29:04.439078 systemd-logind[1888]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:29:04.440576 systemd-logind[1888]: Removed session 6. Jan 29 16:29:04.466615 systemd[1]: Started sshd@6-172.31.31.0:22-139.178.68.195:40136.service - OpenSSH per-connection server daemon (139.178.68.195:40136). Jan 29 16:29:04.642443 sshd[2193]: Accepted publickey for core from 139.178.68.195 port 40136 ssh2: RSA SHA256:vbUkVCMrs4gTXRs1sqC0OSq8ZkfijT0hqZd53m264iQ Jan 29 16:29:04.644200 sshd-session[2193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:29:04.650035 systemd-logind[1888]: New session 7 of user core. Jan 29 16:29:04.665379 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 16:29:04.807431 sudo[2196]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 16:29:04.807952 sudo[2196]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:29:04.823935 sudo[2196]: pam_unix(sudo:session): session closed for user root Jan 29 16:29:04.846317 sshd[2195]: Connection closed by 139.178.68.195 port 40136 Jan 29 16:29:04.847353 sshd-session[2193]: pam_unix(sshd:session): session closed for user core Jan 29 16:29:04.851970 systemd-logind[1888]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:29:04.852994 systemd[1]: sshd@6-172.31.31.0:22-139.178.68.195:40136.service: Deactivated successfully. Jan 29 16:29:04.855124 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:29:04.856253 systemd-logind[1888]: Removed session 7. Jan 29 16:29:04.887504 systemd[1]: Started sshd@7-172.31.31.0:22-139.178.68.195:54622.service - OpenSSH per-connection server daemon (139.178.68.195:54622). Jan 29 16:29:05.053309 sshd[2202]: Accepted publickey for core from 139.178.68.195 port 54622 ssh2: RSA SHA256:vbUkVCMrs4gTXRs1sqC0OSq8ZkfijT0hqZd53m264iQ Jan 29 16:29:05.055069 sshd-session[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:29:05.060225 systemd-logind[1888]: New session 8 of user core. Jan 29 16:29:05.076394 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 16:29:05.177790 sudo[2206]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 16:29:05.178305 sudo[2206]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:29:05.179920 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:29:05.190613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 16:29:05.192004 sudo[2206]: pam_unix(sudo:session): session closed for user root Jan 29 16:29:05.208649 sudo[2205]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 16:29:05.209058 sudo[2205]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:29:05.235388 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:29:05.278121 augenrules[2231]: No rules Jan 29 16:29:05.279984 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:29:05.280540 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:29:05.281691 sudo[2205]: pam_unix(sudo:session): session closed for user root Jan 29 16:29:05.307249 sshd[2204]: Connection closed by 139.178.68.195 port 54622 Jan 29 16:29:05.308558 sshd-session[2202]: pam_unix(sshd:session): session closed for user core Jan 29 16:29:05.312070 systemd[1]: sshd@7-172.31.31.0:22-139.178.68.195:54622.service: Deactivated successfully. Jan 29 16:29:05.314819 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:29:05.317994 systemd-logind[1888]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:29:05.319868 systemd-logind[1888]: Removed session 8. Jan 29 16:29:05.342569 systemd[1]: Started sshd@8-172.31.31.0:22-139.178.68.195:54626.service - OpenSSH per-connection server daemon (139.178.68.195:54626). Jan 29 16:29:05.503199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:29:05.525719 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:29:05.538333 sshd[2240]: Accepted publickey for core from 139.178.68.195 port 54626 ssh2: RSA SHA256:vbUkVCMrs4gTXRs1sqC0OSq8ZkfijT0hqZd53m264iQ Jan 29 16:29:05.541916 sshd-session[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:29:05.551373 systemd-logind[1888]: New session 9 of user core. Jan 29 16:29:05.557725 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 16:29:05.598196 kubelet[2247]: E0129 16:29:05.597854 2247 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:29:05.602030 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:29:05.602234 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:29:05.602601 systemd[1]: kubelet.service: Consumed 168ms CPU time, 98.1M memory peak. Jan 29 16:29:05.659621 sudo[2255]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:29:05.660013 sudo[2255]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:29:06.905584 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:29:06.905999 systemd[1]: kubelet.service: Consumed 168ms CPU time, 98.1M memory peak. Jan 29 16:29:06.912626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:29:06.954392 systemd[1]: Reload requested from client PID 2287 ('systemctl') (unit session-9.scope)... Jan 29 16:29:06.954421 systemd[1]: Reloading... Jan 29 16:29:07.183587 zram_generator::config[2335]: No configuration found. 
Jan 29 16:29:07.369397 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:29:07.561989 systemd[1]: Reloading finished in 606 ms. Jan 29 16:29:07.648880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:29:07.662432 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:29:07.664447 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:29:07.664720 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:29:07.664781 systemd[1]: kubelet.service: Consumed 116ms CPU time, 83.5M memory peak. Jan 29 16:29:07.669833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:29:07.968531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:29:07.977468 (kubelet)[2394]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:29:08.041033 kubelet[2394]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:29:08.041033 kubelet[2394]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:29:08.041033 kubelet[2394]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:29:08.042774 kubelet[2394]: I0129 16:29:08.042707 2394 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:29:08.754340 kubelet[2394]: I0129 16:29:08.751652 2394 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 16:29:08.754340 kubelet[2394]: I0129 16:29:08.751694 2394 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:29:08.754340 kubelet[2394]: I0129 16:29:08.752522 2394 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 16:29:08.823053 kubelet[2394]: I0129 16:29:08.822215 2394 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:29:08.839989 kubelet[2394]: E0129 16:29:08.839931 2394 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:29:08.839989 kubelet[2394]: I0129 16:29:08.839967 2394 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:29:08.846215 kubelet[2394]: I0129 16:29:08.846183 2394 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:29:08.862718 kubelet[2394]: I0129 16:29:08.854025 2394 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 16:29:08.862849 kubelet[2394]: I0129 16:29:08.862809 2394 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:29:08.866752 kubelet[2394]: I0129 16:29:08.862849 2394 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.31.0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 16:29:08.867002 kubelet[2394]: I0129 16:29:08.866778 2394 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:29:08.867002 kubelet[2394]: I0129 16:29:08.866798 2394 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 16:29:08.867002 kubelet[2394]: I0129 16:29:08.866994 2394 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:29:08.883541 kubelet[2394]: I0129 16:29:08.883496 2394 kubelet.go:408] "Attempting to sync node with API server" Jan 29 16:29:08.883541 kubelet[2394]: I0129 16:29:08.883544 2394 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:29:08.883720 kubelet[2394]: I0129 16:29:08.883588 2394 kubelet.go:314] "Adding apiserver pod source" Jan 29 16:29:08.883720 kubelet[2394]: I0129 16:29:08.883606 2394 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:29:08.899243 kubelet[2394]: E0129 16:29:08.898953 2394 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:08.899243 kubelet[2394]: E0129 16:29:08.899018 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:08.907739 kubelet[2394]: I0129 16:29:08.907212 2394 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:29:08.913728 kubelet[2394]: I0129 16:29:08.913693 2394 
kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:29:08.916070 kubelet[2394]: W0129 16:29:08.916033 2394 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 16:29:08.917385 kubelet[2394]: I0129 16:29:08.917358 2394 server.go:1269] "Started kubelet" Jan 29 16:29:08.922169 kubelet[2394]: I0129 16:29:08.920920 2394 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:29:08.923731 kubelet[2394]: I0129 16:29:08.923705 2394 server.go:460] "Adding debug handlers to kubelet server" Jan 29 16:29:08.927505 kubelet[2394]: I0129 16:29:08.927435 2394 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:29:08.928022 kubelet[2394]: I0129 16:29:08.928002 2394 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:29:08.929019 kubelet[2394]: I0129 16:29:08.928997 2394 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:29:08.936001 kubelet[2394]: I0129 16:29:08.934674 2394 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:29:08.949144 kubelet[2394]: E0129 16:29:08.944373 2394 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.31.0.181f36b8e41da0b0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.31.0,UID:172.31.31.0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.31.0,},FirstTimestamp:2025-01-29 16:29:08.917330096 +0000 UTC m=+0.933477249,LastTimestamp:2025-01-29 16:29:08.917330096 +0000 UTC m=+0.933477249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.31.0,}" Jan 29 16:29:08.951971 kubelet[2394]: W0129 16:29:08.951786 2394 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 29 16:29:08.951971 kubelet[2394]: E0129 16:29:08.951884 2394 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 29 16:29:08.952199 kubelet[2394]: W0129 16:29:08.952139 2394 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.31.0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 16:29:08.952335 kubelet[2394]: E0129 16:29:08.952211 2394 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.31.0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 29 16:29:08.953966 kubelet[2394]: I0129 16:29:08.953940 2394 
volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 16:29:08.954105 kubelet[2394]: I0129 16:29:08.954087 2394 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 16:29:08.954224 kubelet[2394]: I0129 16:29:08.954170 2394 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:29:08.955029 kubelet[2394]: I0129 16:29:08.955006 2394 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:29:08.955227 kubelet[2394]: I0129 16:29:08.955144 2394 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:29:08.957781 kubelet[2394]: E0129 16:29:08.957757 2394 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:29:08.959014 kubelet[2394]: E0129 16:29:08.958943 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.31.0\" not found" Jan 29 16:29:08.959883 kubelet[2394]: I0129 16:29:08.959851 2394 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:29:08.985669 kubelet[2394]: E0129 16:29:08.984347 2394 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.31.0\" not found" node="172.31.31.0" Jan 29 16:29:08.986519 kubelet[2394]: I0129 16:29:08.986476 2394 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:29:08.986519 kubelet[2394]: I0129 16:29:08.986493 2394 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:29:08.986519 kubelet[2394]: I0129 16:29:08.986516 2394 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:29:08.992920 kubelet[2394]: I0129 16:29:08.992804 2394 policy_none.go:49] "None policy: Start" Jan 29 16:29:08.993740 kubelet[2394]: I0129 16:29:08.993718 2394 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:29:08.993883 kubelet[2394]: I0129 16:29:08.993745 2394 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:29:09.010369 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:29:09.035614 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:29:09.046255 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 16:29:09.055710 kubelet[2394]: I0129 16:29:09.055581 2394 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:29:09.056406 kubelet[2394]: I0129 16:29:09.056018 2394 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:29:09.056406 kubelet[2394]: I0129 16:29:09.056035 2394 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:29:09.057757 kubelet[2394]: I0129 16:29:09.057245 2394 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:29:09.065585 kubelet[2394]: E0129 16:29:09.065502 2394 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.31.0\" not found" Jan 29 16:29:09.141214 kubelet[2394]: I0129 16:29:09.141086 2394 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 29 16:29:09.144307 kubelet[2394]: I0129 16:29:09.144268 2394 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:29:09.144415 kubelet[2394]: I0129 16:29:09.144319 2394 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:29:09.144415 kubelet[2394]: I0129 16:29:09.144343 2394 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 16:29:09.144883 kubelet[2394]: E0129 16:29:09.144459 2394 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 29 16:29:09.158212 kubelet[2394]: I0129 16:29:09.157527 2394 kubelet_node_status.go:72] "Attempting to register node" node="172.31.31.0" Jan 29 16:29:09.163457 kubelet[2394]: I0129 16:29:09.163421 2394 kubelet_node_status.go:75] "Successfully registered node" node="172.31.31.0" Jan 29 16:29:09.163457 kubelet[2394]: E0129 16:29:09.163458 2394 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.31.0\": node \"172.31.31.0\" not found" Jan 29 16:29:09.184205 kubelet[2394]: E0129 16:29:09.184143 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.31.0\" not found" Jan 29 16:29:09.285378 kubelet[2394]: E0129 16:29:09.285240 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.31.0\" not found" Jan 29 16:29:09.386425 kubelet[2394]: E0129 16:29:09.386284 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.31.0\" not found" Jan 29 16:29:09.487347 kubelet[2394]: E0129 16:29:09.487289 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.31.0\" not found" Jan 29 16:29:09.529266 sudo[2255]: pam_unix(sudo:session): session closed for user root Jan 29 16:29:09.552656 sshd[2253]: Connection closed by 139.178.68.195 port 54626 Jan 29 16:29:09.553873 sshd-session[2240]: pam_unix(sshd:session): session closed for user core Jan 29 16:29:09.561812 systemd[1]: sshd@8-172.31.31.0:22-139.178.68.195:54626.service: Deactivated successfully. Jan 29 16:29:09.565503 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:29:09.566008 systemd[1]: session-9.scope: Consumed 496ms CPU time, 73M memory peak. Jan 29 16:29:09.569332 systemd-logind[1888]: Session 9 logged out. Waiting for processes to exit. Jan 29 16:29:09.570611 systemd-logind[1888]: Removed session 9. 
Jan 29 16:29:09.587706 kubelet[2394]: E0129 16:29:09.587666 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.31.0\" not found" Jan 29 16:29:09.688548 kubelet[2394]: E0129 16:29:09.688496 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.31.0\" not found" Jan 29 16:29:09.766233 kubelet[2394]: I0129 16:29:09.766184 2394 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 16:29:09.766428 kubelet[2394]: W0129 16:29:09.766403 2394 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:29:09.766487 kubelet[2394]: W0129 16:29:09.766448 2394 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:29:09.789569 kubelet[2394]: E0129 16:29:09.789528 2394 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.31.0\" not found" Jan 29 16:29:09.891128 kubelet[2394]: I0129 16:29:09.891002 2394 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 16:29:09.892516 kubelet[2394]: I0129 16:29:09.892045 2394 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 16:29:09.892611 containerd[1903]: time="2025-01-29T16:29:09.891586965Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 16:29:09.899389 kubelet[2394]: I0129 16:29:09.899339 2394 apiserver.go:52] "Watching apiserver" Jan 29 16:29:09.899614 kubelet[2394]: E0129 16:29:09.899346 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:09.930927 systemd[1]: Created slice kubepods-burstable-pod8b0e7eaf_80d3_4dfa_8e7b_99c4fc7e5448.slice - libcontainer container kubepods-burstable-pod8b0e7eaf_80d3_4dfa_8e7b_99c4fc7e5448.slice. Jan 29 16:29:09.954845 kubelet[2394]: I0129 16:29:09.954659 2394 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 16:29:09.956662 systemd[1]: Created slice kubepods-besteffort-pod27a7a954_325d_4147_b1f0_2385a48d659c.slice - libcontainer container kubepods-besteffort-pod27a7a954_325d_4147_b1f0_2385a48d659c.slice. 
Jan 29 16:29:09.960001 kubelet[2394]: I0129 16:29:09.959970 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-cilium-run\") pod \"cilium-rn4ds\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " pod="kube-system/cilium-rn4ds" Jan 29 16:29:09.960001 kubelet[2394]: I0129 16:29:09.960017 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-bpf-maps\") pod \"cilium-rn4ds\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " pod="kube-system/cilium-rn4ds" Jan 29 16:29:09.960616 kubelet[2394]: I0129 16:29:09.960046 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-hostproc\") pod \"cilium-rn4ds\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " pod="kube-system/cilium-rn4ds" Jan 29 16:29:09.960616 kubelet[2394]: I0129 16:29:09.960067 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-lib-modules\") pod \"cilium-rn4ds\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " pod="kube-system/cilium-rn4ds" Jan 29 16:29:09.960616 kubelet[2394]: I0129 16:29:09.960104 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-host-proc-sys-kernel\") pod \"cilium-rn4ds\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " pod="kube-system/cilium-rn4ds" Jan 29 16:29:09.960616 kubelet[2394]: I0129 16:29:09.960126 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/27a7a954-325d-4147-b1f0-2385a48d659c-kube-proxy\") pod \"kube-proxy-r25qg\" (UID: \"27a7a954-325d-4147-b1f0-2385a48d659c\") " pod="kube-system/kube-proxy-r25qg" Jan 29 16:29:09.960616 kubelet[2394]: I0129 16:29:09.960164 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/27a7a954-325d-4147-b1f0-2385a48d659c-lib-modules\") pod \"kube-proxy-r25qg\" (UID: \"27a7a954-325d-4147-b1f0-2385a48d659c\") " pod="kube-system/kube-proxy-r25qg" Jan 29 16:29:09.960616 kubelet[2394]: I0129 16:29:09.960189 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-cni-path\") pod \"cilium-rn4ds\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " pod="kube-system/cilium-rn4ds" Jan 29 16:29:09.960953 kubelet[2394]: I0129 16:29:09.960216 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-etc-cni-netd\") pod \"cilium-rn4ds\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " pod="kube-system/cilium-rn4ds" Jan 29 16:29:09.960953 kubelet[2394]: I0129 16:29:09.960238 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-clustermesh-secrets\") pod \"cilium-rn4ds\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " pod="kube-system/cilium-rn4ds" Jan 29 16:29:09.960953 kubelet[2394]: I0129 16:29:09.960261 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-cilium-config-path\") pod \"cilium-rn4ds\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " pod="kube-system/cilium-rn4ds" Jan 29 16:29:09.961386 kubelet[2394]: I0129 16:29:09.960308 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-host-proc-sys-net\") pod \"cilium-rn4ds\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " pod="kube-system/cilium-rn4ds" Jan 29 16:29:09.961386 kubelet[2394]: I0129 16:29:09.961200 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-hubble-tls\") pod \"cilium-rn4ds\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " pod="kube-system/cilium-rn4ds" Jan 29 16:29:09.961386 kubelet[2394]: I0129 16:29:09.961235 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxl68\" (UniqueName: \"kubernetes.io/projected/27a7a954-325d-4147-b1f0-2385a48d659c-kube-api-access-nxl68\") pod \"kube-proxy-r25qg\" (UID: \"27a7a954-325d-4147-b1f0-2385a48d659c\") " pod="kube-system/kube-proxy-r25qg" Jan 29 16:29:09.961386 kubelet[2394]: I0129 16:29:09.961273 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-cilium-cgroup\") pod \"cilium-rn4ds\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " pod="kube-system/cilium-rn4ds" Jan 29 16:29:09.961386 kubelet[2394]: I0129 16:29:09.961298 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pff7\" (UniqueName: \"kubernetes.io/projected/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-kube-api-access-4pff7\") pod \"cilium-rn4ds\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " pod="kube-system/cilium-rn4ds" Jan 29 16:29:09.961699 kubelet[2394]: I0129 16:29:09.961320 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-xtables-lock\") pod \"cilium-rn4ds\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " pod="kube-system/cilium-rn4ds" Jan 29 16:29:09.961699 kubelet[2394]: I0129 16:29:09.961343 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/27a7a954-325d-4147-b1f0-2385a48d659c-xtables-lock\") pod \"kube-proxy-r25qg\" (UID: \"27a7a954-325d-4147-b1f0-2385a48d659c\") " pod="kube-system/kube-proxy-r25qg" Jan 29 16:29:10.254067 containerd[1903]: time="2025-01-29T16:29:10.253946333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rn4ds,Uid:8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448,Namespace:kube-system,Attempt:0,}" Jan 29 16:29:10.264960 containerd[1903]: time="2025-01-29T16:29:10.264916905Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r25qg,Uid:27a7a954-325d-4147-b1f0-2385a48d659c,Namespace:kube-system,Attempt:0,}" Jan 29 16:29:10.806692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1918895454.mount: Deactivated successfully. Jan 29 16:29:10.817118 containerd[1903]: time="2025-01-29T16:29:10.817058763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:29:10.822079 containerd[1903]: time="2025-01-29T16:29:10.822000891Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 16:29:10.823214 containerd[1903]: time="2025-01-29T16:29:10.823166469Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:29:10.824882 containerd[1903]: time="2025-01-29T16:29:10.824819750Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:29:10.827171 containerd[1903]: time="2025-01-29T16:29:10.827078264Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:29:10.836227 containerd[1903]: time="2025-01-29T16:29:10.836173872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:29:10.839904 containerd[1903]: time="2025-01-29T16:29:10.839813925Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 574.679576ms" Jan 29 16:29:10.848104 containerd[1903]: time="2025-01-29T16:29:10.848055187Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 593.978379ms" Jan 29 16:29:10.902417 kubelet[2394]: E0129 16:29:10.901070 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:11.095720 containerd[1903]: time="2025-01-29T16:29:11.093570012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:29:11.095720 containerd[1903]: time="2025-01-29T16:29:11.094859205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:29:11.095720 containerd[1903]: time="2025-01-29T16:29:11.094897437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:29:11.095720 containerd[1903]: time="2025-01-29T16:29:11.095020047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:29:11.096526 containerd[1903]: time="2025-01-29T16:29:11.095218810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:29:11.096526 containerd[1903]: time="2025-01-29T16:29:11.095304669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:29:11.096526 containerd[1903]: time="2025-01-29T16:29:11.095354176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:29:11.096526 containerd[1903]: time="2025-01-29T16:29:11.095469756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:29:11.264349 systemd[1]: run-containerd-runc-k8s.io-60d400afe80d4ce2cfd66f6cdb622f0659eba5509d0bc4e28d428b7f90dc48c2-runc.OH6ijq.mount: Deactivated successfully. Jan 29 16:29:11.280487 systemd[1]: Started cri-containerd-061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a.scope - libcontainer container 061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a. Jan 29 16:29:11.285000 systemd[1]: Started cri-containerd-60d400afe80d4ce2cfd66f6cdb622f0659eba5509d0bc4e28d428b7f90dc48c2.scope - libcontainer container 60d400afe80d4ce2cfd66f6cdb622f0659eba5509d0bc4e28d428b7f90dc48c2. Jan 29 16:29:11.332520 containerd[1903]: time="2025-01-29T16:29:11.332370487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rn4ds,Uid:8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448,Namespace:kube-system,Attempt:0,} returns sandbox id \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\"" Jan 29 16:29:11.338176 containerd[1903]: time="2025-01-29T16:29:11.337025404Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 16:29:11.360663 containerd[1903]: time="2025-01-29T16:29:11.360109103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r25qg,Uid:27a7a954-325d-4147-b1f0-2385a48d659c,Namespace:kube-system,Attempt:0,} returns sandbox id \"60d400afe80d4ce2cfd66f6cdb622f0659eba5509d0bc4e28d428b7f90dc48c2\"" Jan 29 16:29:11.902279 kubelet[2394]: E0129 16:29:11.902224 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:12.902689 kubelet[2394]: E0129 16:29:12.902627 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:13.905545 kubelet[2394]: E0129 16:29:13.903825 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:14.904947 kubelet[2394]: E0129 16:29:14.904864 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:15.904993 kubelet[2394]: E0129 16:29:15.904957 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:16.906382 kubelet[2394]: E0129 16:29:16.906293 2394 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:17.861644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount282944197.mount: Deactivated successfully. Jan 29 16:29:17.907527 kubelet[2394]: E0129 16:29:17.907396 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:18.908652 kubelet[2394]: E0129 16:29:18.908300 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:19.908517 kubelet[2394]: E0129 16:29:19.908471 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:20.909179 kubelet[2394]: E0129 16:29:20.909129 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:21.216022 containerd[1903]: time="2025-01-29T16:29:21.215463564Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:29:21.219178 containerd[1903]: time="2025-01-29T16:29:21.218055575Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 16:29:21.221347 containerd[1903]: time="2025-01-29T16:29:21.219924175Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:29:21.231478 containerd[1903]: time="2025-01-29T16:29:21.231340189Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.8942725s" Jan 29 16:29:21.231645 containerd[1903]: time="2025-01-29T16:29:21.231495909Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 16:29:21.237165 containerd[1903]: time="2025-01-29T16:29:21.236132566Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 16:29:21.238064 containerd[1903]: time="2025-01-29T16:29:21.238024090Z" level=info msg="CreateContainer within sandbox \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:29:21.261948 containerd[1903]: time="2025-01-29T16:29:21.261902596Z" level=info msg="CreateContainer within sandbox \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4e20339a22f043e811ff64d603be773cb4be1ce464d8b24b50e70e2e8547edb5\"" Jan 29 16:29:21.263409 containerd[1903]: time="2025-01-29T16:29:21.263376601Z" level=info msg="StartContainer for \"4e20339a22f043e811ff64d603be773cb4be1ce464d8b24b50e70e2e8547edb5\"" Jan 29 16:29:21.296574 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 29 16:29:21.349716 systemd[1]: Started cri-containerd-4e20339a22f043e811ff64d603be773cb4be1ce464d8b24b50e70e2e8547edb5.scope - libcontainer container 4e20339a22f043e811ff64d603be773cb4be1ce464d8b24b50e70e2e8547edb5. Jan 29 16:29:21.401027 containerd[1903]: time="2025-01-29T16:29:21.400966496Z" level=info msg="StartContainer for \"4e20339a22f043e811ff64d603be773cb4be1ce464d8b24b50e70e2e8547edb5\" returns successfully" Jan 29 16:29:21.424076 systemd[1]: cri-containerd-4e20339a22f043e811ff64d603be773cb4be1ce464d8b24b50e70e2e8547edb5.scope: Deactivated successfully. Jan 29 16:29:21.783464 containerd[1903]: time="2025-01-29T16:29:21.783378725Z" level=info msg="shim disconnected" id=4e20339a22f043e811ff64d603be773cb4be1ce464d8b24b50e70e2e8547edb5 namespace=k8s.io Jan 29 16:29:21.783464 containerd[1903]: time="2025-01-29T16:29:21.783449552Z" level=warning msg="cleaning up after shim disconnected" id=4e20339a22f043e811ff64d603be773cb4be1ce464d8b24b50e70e2e8547edb5 namespace=k8s.io Jan 29 16:29:21.783464 containerd[1903]: time="2025-01-29T16:29:21.783462945Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:29:21.910453 kubelet[2394]: E0129 16:29:21.910365 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:22.222564 containerd[1903]: time="2025-01-29T16:29:22.222356110Z" level=info msg="CreateContainer within sandbox \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:29:22.262051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e20339a22f043e811ff64d603be773cb4be1ce464d8b24b50e70e2e8547edb5-rootfs.mount: Deactivated successfully. Jan 29 16:29:22.269188 containerd[1903]: time="2025-01-29T16:29:22.269124700Z" level=info msg="CreateContainer within sandbox \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d6647a262d6d6cca9717ceba86243498b59da66efc83b494f92c8c5460f4fae8\"" Jan 29 16:29:22.278171 containerd[1903]: time="2025-01-29T16:29:22.276578473Z" level=info msg="StartContainer for \"d6647a262d6d6cca9717ceba86243498b59da66efc83b494f92c8c5460f4fae8\"" Jan 29 16:29:22.366393 systemd[1]: Started cri-containerd-d6647a262d6d6cca9717ceba86243498b59da66efc83b494f92c8c5460f4fae8.scope - libcontainer container d6647a262d6d6cca9717ceba86243498b59da66efc83b494f92c8c5460f4fae8. Jan 29 16:29:22.447101 containerd[1903]: time="2025-01-29T16:29:22.447052938Z" level=info msg="StartContainer for \"d6647a262d6d6cca9717ceba86243498b59da66efc83b494f92c8c5460f4fae8\" returns successfully" Jan 29 16:29:22.476671 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:29:22.477524 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:29:22.480537 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:29:22.491886 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:29:22.499067 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 29 16:29:22.502482 systemd[1]: cri-containerd-d6647a262d6d6cca9717ceba86243498b59da66efc83b494f92c8c5460f4fae8.scope: Deactivated successfully. Jan 29 16:29:22.556438 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 16:29:22.695429 containerd[1903]: time="2025-01-29T16:29:22.695291873Z" level=info msg="shim disconnected" id=d6647a262d6d6cca9717ceba86243498b59da66efc83b494f92c8c5460f4fae8 namespace=k8s.io Jan 29 16:29:22.696263 containerd[1903]: time="2025-01-29T16:29:22.695982203Z" level=warning msg="cleaning up after shim disconnected" id=d6647a262d6d6cca9717ceba86243498b59da66efc83b494f92c8c5460f4fae8 namespace=k8s.io Jan 29 16:29:22.696263 containerd[1903]: time="2025-01-29T16:29:22.696009569Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:29:22.912084 kubelet[2394]: E0129 16:29:22.911882 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:23.218120 containerd[1903]: time="2025-01-29T16:29:23.217692658Z" level=info msg="CreateContainer within sandbox \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:29:23.243731 containerd[1903]: time="2025-01-29T16:29:23.243596681Z" level=info msg="CreateContainer within sandbox \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"697d5ae59082fe39ca62126f030922452ab241219bfe2c5042ba5c133eba7c02\"" Jan 29 16:29:23.246339 containerd[1903]: time="2025-01-29T16:29:23.245648251Z" level=info msg="StartContainer for \"697d5ae59082fe39ca62126f030922452ab241219bfe2c5042ba5c133eba7c02\"" Jan 29 16:29:23.256725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6647a262d6d6cca9717ceba86243498b59da66efc83b494f92c8c5460f4fae8-rootfs.mount: Deactivated successfully. Jan 29 16:29:23.256888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3390906169.mount: Deactivated successfully. Jan 29 16:29:23.328247 systemd[1]: Started cri-containerd-697d5ae59082fe39ca62126f030922452ab241219bfe2c5042ba5c133eba7c02.scope - libcontainer container 697d5ae59082fe39ca62126f030922452ab241219bfe2c5042ba5c133eba7c02. Jan 29 16:29:23.399534 systemd[1]: cri-containerd-697d5ae59082fe39ca62126f030922452ab241219bfe2c5042ba5c133eba7c02.scope: Deactivated successfully. Jan 29 16:29:23.409847 containerd[1903]: time="2025-01-29T16:29:23.409719620Z" level=info msg="StartContainer for \"697d5ae59082fe39ca62126f030922452ab241219bfe2c5042ba5c133eba7c02\" returns successfully" Jan 29 16:29:23.452167 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-697d5ae59082fe39ca62126f030922452ab241219bfe2c5042ba5c133eba7c02-rootfs.mount: Deactivated successfully. 
Jan 29 16:29:23.559611 containerd[1903]: time="2025-01-29T16:29:23.559224859Z" level=info msg="shim disconnected" id=697d5ae59082fe39ca62126f030922452ab241219bfe2c5042ba5c133eba7c02 namespace=k8s.io Jan 29 16:29:23.559611 containerd[1903]: time="2025-01-29T16:29:23.559524171Z" level=warning msg="cleaning up after shim disconnected" id=697d5ae59082fe39ca62126f030922452ab241219bfe2c5042ba5c133eba7c02 namespace=k8s.io Jan 29 16:29:23.559611 containerd[1903]: time="2025-01-29T16:29:23.559539071Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:29:23.912680 kubelet[2394]: E0129 16:29:23.912434 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:24.042220 containerd[1903]: time="2025-01-29T16:29:24.042169410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:29:24.043511 containerd[1903]: time="2025-01-29T16:29:24.043326574Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 29 16:29:24.045747 containerd[1903]: time="2025-01-29T16:29:24.044531008Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:29:24.047906 containerd[1903]: time="2025-01-29T16:29:24.046814019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:29:24.047906 containerd[1903]: time="2025-01-29T16:29:24.047762831Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.811569036s" Jan 29 16:29:24.047906 containerd[1903]: time="2025-01-29T16:29:24.047798299Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 16:29:24.050242 containerd[1903]: time="2025-01-29T16:29:24.050211233Z" level=info msg="CreateContainer within sandbox \"60d400afe80d4ce2cfd66f6cdb622f0659eba5509d0bc4e28d428b7f90dc48c2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:29:24.096515 containerd[1903]: time="2025-01-29T16:29:24.096463480Z" level=info msg="CreateContainer within sandbox \"60d400afe80d4ce2cfd66f6cdb622f0659eba5509d0bc4e28d428b7f90dc48c2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d81a537a71dacde8290365dffafa93b2c63d73fb680234d2cc6db2c3935327d1\"" Jan 29 16:29:24.098019 containerd[1903]: time="2025-01-29T16:29:24.097976241Z" level=info msg="StartContainer for \"d81a537a71dacde8290365dffafa93b2c63d73fb680234d2cc6db2c3935327d1\"" Jan 29 16:29:24.131360 systemd[1]: Started cri-containerd-d81a537a71dacde8290365dffafa93b2c63d73fb680234d2cc6db2c3935327d1.scope - libcontainer container d81a537a71dacde8290365dffafa93b2c63d73fb680234d2cc6db2c3935327d1. 
Jan 29 16:29:24.177573 containerd[1903]: time="2025-01-29T16:29:24.177362245Z" level=info msg="StartContainer for \"d81a537a71dacde8290365dffafa93b2c63d73fb680234d2cc6db2c3935327d1\" returns successfully" Jan 29 16:29:24.222169 containerd[1903]: time="2025-01-29T16:29:24.219929662Z" level=info msg="CreateContainer within sandbox \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:29:24.292172 kubelet[2394]: I0129 16:29:24.291207 2394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r25qg" podStartSLOduration=2.606288068 podStartE2EDuration="15.291187014s" podCreationTimestamp="2025-01-29 16:29:09 +0000 UTC" firstStartedPulling="2025-01-29 16:29:11.363884351 +0000 UTC m=+3.380031496" lastFinishedPulling="2025-01-29 16:29:24.048783296 +0000 UTC m=+16.064930442" observedRunningTime="2025-01-29 16:29:24.290867307 +0000 UTC m=+16.307014465" watchObservedRunningTime="2025-01-29 16:29:24.291187014 +0000 UTC m=+16.307334173" Jan 29 16:29:24.295035 containerd[1903]: time="2025-01-29T16:29:24.293126540Z" level=info msg="CreateContainer within sandbox \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"178f0a3c82339ffb5c9ed72c16dfda07bfe3184334db1226c0675c5318fed39f\"" Jan 29 16:29:24.297016 containerd[1903]: time="2025-01-29T16:29:24.296981215Z" level=info msg="StartContainer for \"178f0a3c82339ffb5c9ed72c16dfda07bfe3184334db1226c0675c5318fed39f\"" Jan 29 16:29:24.397414 systemd[1]: Started cri-containerd-178f0a3c82339ffb5c9ed72c16dfda07bfe3184334db1226c0675c5318fed39f.scope - libcontainer container 178f0a3c82339ffb5c9ed72c16dfda07bfe3184334db1226c0675c5318fed39f. Jan 29 16:29:24.456672 systemd[1]: cri-containerd-178f0a3c82339ffb5c9ed72c16dfda07bfe3184334db1226c0675c5318fed39f.scope: Deactivated successfully. Jan 29 16:29:24.462568 containerd[1903]: time="2025-01-29T16:29:24.462527176Z" level=info msg="StartContainer for \"178f0a3c82339ffb5c9ed72c16dfda07bfe3184334db1226c0675c5318fed39f\" returns successfully" Jan 29 16:29:24.463320 containerd[1903]: time="2025-01-29T16:29:24.459312486Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b0e7eaf_80d3_4dfa_8e7b_99c4fc7e5448.slice/cri-containerd-178f0a3c82339ffb5c9ed72c16dfda07bfe3184334db1226c0675c5318fed39f.scope/memory.events\": no such file or directory" Jan 29 16:29:24.509223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-178f0a3c82339ffb5c9ed72c16dfda07bfe3184334db1226c0675c5318fed39f-rootfs.mount: Deactivated successfully. 
Jan 29 16:29:24.618259 containerd[1903]: time="2025-01-29T16:29:24.618192233Z" level=info msg="shim disconnected" id=178f0a3c82339ffb5c9ed72c16dfda07bfe3184334db1226c0675c5318fed39f namespace=k8s.io Jan 29 16:29:24.618259 containerd[1903]: time="2025-01-29T16:29:24.618252934Z" level=warning msg="cleaning up after shim disconnected" id=178f0a3c82339ffb5c9ed72c16dfda07bfe3184334db1226c0675c5318fed39f namespace=k8s.io Jan 29 16:29:24.618259 containerd[1903]: time="2025-01-29T16:29:24.618264016Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:29:24.914394 kubelet[2394]: E0129 16:29:24.914239 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:25.244207 containerd[1903]: time="2025-01-29T16:29:25.243797487Z" level=info msg="CreateContainer within sandbox \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:29:25.277639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3220675145.mount: Deactivated successfully. Jan 29 16:29:25.280266 containerd[1903]: time="2025-01-29T16:29:25.279856629Z" level=info msg="CreateContainer within sandbox \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21\"" Jan 29 16:29:25.281057 containerd[1903]: time="2025-01-29T16:29:25.280972324Z" level=info msg="StartContainer for \"91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21\"" Jan 29 16:29:25.341666 systemd[1]: Started cri-containerd-91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21.scope - libcontainer container 91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21. 
Jan 29 16:29:25.382647 containerd[1903]: time="2025-01-29T16:29:25.382518942Z" level=info msg="StartContainer for \"91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21\" returns successfully" Jan 29 16:29:25.566584 kubelet[2394]: I0129 16:29:25.566407 2394 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 16:29:25.915099 kubelet[2394]: E0129 16:29:25.914766 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:26.135181 kernel: Initializing XFRM netlink socket Jan 29 16:29:26.273220 kubelet[2394]: I0129 16:29:26.272848 2394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rn4ds" podStartSLOduration=7.373892059 podStartE2EDuration="17.272763232s" podCreationTimestamp="2025-01-29 16:29:09 +0000 UTC" firstStartedPulling="2025-01-29 16:29:11.335886957 +0000 UTC m=+3.352034100" lastFinishedPulling="2025-01-29 16:29:21.234758136 +0000 UTC m=+13.250905273" observedRunningTime="2025-01-29 16:29:26.272251434 +0000 UTC m=+18.288398594" watchObservedRunningTime="2025-01-29 16:29:26.272763232 +0000 UTC m=+18.288910377" Jan 29 16:29:26.915684 kubelet[2394]: E0129 16:29:26.915628 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:27.911130 systemd-networkd[1817]: cilium_host: Link UP Jan 29 16:29:27.914668 systemd-networkd[1817]: cilium_net: Link UP Jan 29 16:29:27.915387 systemd-networkd[1817]: cilium_net: Gained carrier Jan 29 16:29:27.918414 kubelet[2394]: E0129 16:29:27.918177 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:27.915910 systemd-networkd[1817]: cilium_host: Gained carrier Jan 29 16:29:27.919639 (udev-worker)[2788]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:29:27.921776 (udev-worker)[3092]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:29:28.088131 (udev-worker)[3120]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:29:28.113278 systemd-networkd[1817]: cilium_vxlan: Link UP Jan 29 16:29:28.113288 systemd-networkd[1817]: cilium_vxlan: Gained carrier Jan 29 16:29:28.118571 systemd-networkd[1817]: cilium_net: Gained IPv6LL Jan 29 16:29:28.196180 systemd[1]: Created slice kubepods-besteffort-pod43a18c39_4655_4eb3_ade6_cddb3c607b50.slice - libcontainer container kubepods-besteffort-pod43a18c39_4655_4eb3_ade6_cddb3c607b50.slice. 
Jan 29 16:29:28.240677 kubelet[2394]: I0129 16:29:28.240561 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjxp6\" (UniqueName: \"kubernetes.io/projected/43a18c39-4655-4eb3-ade6-cddb3c607b50-kube-api-access-mjxp6\") pod \"nginx-deployment-8587fbcb89-sclnl\" (UID: \"43a18c39-4655-4eb3-ade6-cddb3c607b50\") " pod="default/nginx-deployment-8587fbcb89-sclnl" Jan 29 16:29:28.246528 systemd-networkd[1817]: cilium_host: Gained IPv6LL Jan 29 16:29:28.520434 containerd[1903]: time="2025-01-29T16:29:28.515328699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-sclnl,Uid:43a18c39-4655-4eb3-ade6-cddb3c607b50,Namespace:default,Attempt:0,}" Jan 29 16:29:28.585236 kernel: NET: Registered PF_ALG protocol family Jan 29 16:29:28.884533 kubelet[2394]: E0129 16:29:28.884398 2394 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:28.924183 kubelet[2394]: E0129 16:29:28.918287 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:29.581852 systemd-networkd[1817]: cilium_vxlan: Gained IPv6LL Jan 29 16:29:29.713488 systemd-networkd[1817]: lxc_health: Link UP Jan 29 16:29:29.725591 systemd-networkd[1817]: lxc_health: Gained carrier Jan 29 16:29:29.919314 kubelet[2394]: E0129 16:29:29.919178 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:30.097463 systemd-networkd[1817]: lxc2c84886d7b02: Link UP Jan 29 16:29:30.106446 kernel: eth0: renamed from tmpe2055 Jan 29 16:29:30.116381 systemd-networkd[1817]: lxc2c84886d7b02: Gained carrier Jan 29 16:29:30.921426 kubelet[2394]: E0129 16:29:30.921364 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:31.053898 systemd-networkd[1817]: lxc_health: Gained IPv6LL Jan 29 16:29:31.922081 kubelet[2394]: E0129 16:29:31.922016 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:32.139424 systemd-networkd[1817]: lxc2c84886d7b02: Gained IPv6LL Jan 29 16:29:32.923167 kubelet[2394]: E0129 16:29:32.923077 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:33.924261 kubelet[2394]: E0129 16:29:33.924187 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:34.161639 ntpd[1880]: Listen normally on 7 cilium_host 192.168.1.45:123 Jan 29 16:29:34.350510 ntpd[1880]: 29 Jan 16:29:34 ntpd[1880]: Listen normally on 7 cilium_host 192.168.1.45:123 Jan 29 16:29:34.350510 ntpd[1880]: 29 Jan 16:29:34 ntpd[1880]: Listen normally on 8 cilium_net [fe80::f059:e4ff:fee7:e1f0%3]:123 Jan 29 16:29:34.350510 ntpd[1880]: 29 Jan 16:29:34 ntpd[1880]: Listen normally on 9 cilium_host [fe80::c05f:83ff:febf:e239%4]:123 Jan 29 16:29:34.350510 ntpd[1880]: 29 Jan 16:29:34 ntpd[1880]: Listen normally on 10 cilium_vxlan [fe80::749a:4aff:fe89:878d%5]:123 Jan 29 16:29:34.350510 ntpd[1880]: 29 Jan 16:29:34 ntpd[1880]: Listen normally on 11 lxc_health [fe80::9ce6:d5ff:fec5:d4ae%7]:123 Jan 29 16:29:34.350510 ntpd[1880]: 29 Jan 16:29:34 ntpd[1880]: Listen normally on 12 lxc2c84886d7b02 [fe80::5c7d:5ff:fe8c:6968%9]:123 Jan 29 16:29:34.161743 ntpd[1880]: Listen 
normally on 8 cilium_net [fe80::f059:e4ff:fee7:e1f0%3]:123 Jan 29 16:29:34.163965 ntpd[1880]: Listen normally on 9 cilium_host [fe80::c05f:83ff:febf:e239%4]:123 Jan 29 16:29:34.164103 ntpd[1880]: Listen normally on 10 cilium_vxlan [fe80::749a:4aff:fe89:878d%5]:123 Jan 29 16:29:34.164231 ntpd[1880]: Listen normally on 11 lxc_health [fe80::9ce6:d5ff:fec5:d4ae%7]:123 Jan 29 16:29:34.164312 ntpd[1880]: Listen normally on 12 lxc2c84886d7b02 [fe80::5c7d:5ff:fe8c:6968%9]:123 Jan 29 16:29:34.924495 kubelet[2394]: E0129 16:29:34.924424 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:35.446821 update_engine[1890]: I20250129 16:29:35.446593 1890 update_attempter.cc:509] Updating boot flags... Jan 29 16:29:35.582185 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3473) Jan 29 16:29:35.925277 kubelet[2394]: E0129 16:29:35.925244 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:36.299017 containerd[1903]: time="2025-01-29T16:29:36.298784409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:29:36.299017 containerd[1903]: time="2025-01-29T16:29:36.298861053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:29:36.299017 containerd[1903]: time="2025-01-29T16:29:36.298883918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:29:36.299754 containerd[1903]: time="2025-01-29T16:29:36.299569672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:29:36.365995 systemd[1]: Started cri-containerd-e2055d96cbe959a1014b0aac0181fb756b684ca2f538353a669ccc4cc25bab5e.scope - libcontainer container e2055d96cbe959a1014b0aac0181fb756b684ca2f538353a669ccc4cc25bab5e. 
Jan 29 16:29:36.449510 containerd[1903]: time="2025-01-29T16:29:36.449430188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-sclnl,Uid:43a18c39-4655-4eb3-ade6-cddb3c607b50,Namespace:default,Attempt:0,} returns sandbox id \"e2055d96cbe959a1014b0aac0181fb756b684ca2f538353a669ccc4cc25bab5e\"" Jan 29 16:29:36.463742 containerd[1903]: time="2025-01-29T16:29:36.463701856Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 16:29:36.926775 kubelet[2394]: E0129 16:29:36.926714 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:37.927840 kubelet[2394]: E0129 16:29:37.927274 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:38.928030 kubelet[2394]: E0129 16:29:38.927975 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:39.929553 kubelet[2394]: E0129 16:29:39.929496 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:40.930217 kubelet[2394]: E0129 16:29:40.930065 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:41.235528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount765415032.mount: Deactivated successfully. Jan 29 16:29:41.932308 kubelet[2394]: E0129 16:29:41.931242 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:42.932274 kubelet[2394]: E0129 16:29:42.932182 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:42.978548 containerd[1903]: time="2025-01-29T16:29:42.978492012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:29:42.986335 containerd[1903]: time="2025-01-29T16:29:42.983638985Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 29 16:29:42.986335 containerd[1903]: time="2025-01-29T16:29:42.984101933Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:29:43.000608 containerd[1903]: time="2025-01-29T16:29:43.000556483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:29:43.007319 containerd[1903]: time="2025-01-29T16:29:43.007212133Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 6.543465047s" Jan 29 16:29:43.007319 containerd[1903]: time="2025-01-29T16:29:43.007322166Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 16:29:43.018605 containerd[1903]: time="2025-01-29T16:29:43.018491798Z" level=info 
msg="CreateContainer within sandbox \"e2055d96cbe959a1014b0aac0181fb756b684ca2f538353a669ccc4cc25bab5e\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 16:29:43.034914 containerd[1903]: time="2025-01-29T16:29:43.034860108Z" level=info msg="CreateContainer within sandbox \"e2055d96cbe959a1014b0aac0181fb756b684ca2f538353a669ccc4cc25bab5e\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"70e27dbb678f129ae84c1adff8d417f5670802e163437c1dfcb20fc38fb4f9cc\"" Jan 29 16:29:43.035564 containerd[1903]: time="2025-01-29T16:29:43.035531318Z" level=info msg="StartContainer for \"70e27dbb678f129ae84c1adff8d417f5670802e163437c1dfcb20fc38fb4f9cc\"" Jan 29 16:29:43.080532 systemd[1]: Started cri-containerd-70e27dbb678f129ae84c1adff8d417f5670802e163437c1dfcb20fc38fb4f9cc.scope - libcontainer container 70e27dbb678f129ae84c1adff8d417f5670802e163437c1dfcb20fc38fb4f9cc. Jan 29 16:29:43.122395 containerd[1903]: time="2025-01-29T16:29:43.122052020Z" level=info msg="StartContainer for \"70e27dbb678f129ae84c1adff8d417f5670802e163437c1dfcb20fc38fb4f9cc\" returns successfully" Jan 29 16:29:43.297572 kubelet[2394]: I0129 16:29:43.297416 2394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-sclnl" podStartSLOduration=8.744747219 podStartE2EDuration="15.297402263s" podCreationTimestamp="2025-01-29 16:29:28 +0000 UTC" firstStartedPulling="2025-01-29 16:29:36.458866408 +0000 UTC m=+28.475013545" lastFinishedPulling="2025-01-29 16:29:43.011521446 +0000 UTC m=+35.027668589" observedRunningTime="2025-01-29 16:29:43.296655335 +0000 UTC m=+35.312802492" watchObservedRunningTime="2025-01-29 16:29:43.297402263 +0000 UTC m=+35.313549417" Jan 29 16:29:43.932678 kubelet[2394]: E0129 16:29:43.932560 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:44.932811 kubelet[2394]: E0129 16:29:44.932747 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:45.933508 kubelet[2394]: E0129 16:29:45.933455 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:46.934037 kubelet[2394]: E0129 16:29:46.933983 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:47.934488 kubelet[2394]: E0129 16:29:47.934425 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:48.884449 kubelet[2394]: E0129 16:29:48.884391 2394 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:48.935216 kubelet[2394]: E0129 16:29:48.935170 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:49.156079 systemd[1]: Created slice kubepods-besteffort-podc9940a99_85af_4b88_9c87_3a5d85193889.slice - libcontainer container kubepods-besteffort-podc9940a99_85af_4b88_9c87_3a5d85193889.slice. 
Jan 29 16:29:49.207125 kubelet[2394]: I0129 16:29:49.207004 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-794kh\" (UniqueName: \"kubernetes.io/projected/c9940a99-85af-4b88-9c87-3a5d85193889-kube-api-access-794kh\") pod \"nfs-server-provisioner-0\" (UID: \"c9940a99-85af-4b88-9c87-3a5d85193889\") " pod="default/nfs-server-provisioner-0" Jan 29 16:29:49.207309 kubelet[2394]: I0129 16:29:49.207185 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c9940a99-85af-4b88-9c87-3a5d85193889-data\") pod \"nfs-server-provisioner-0\" (UID: \"c9940a99-85af-4b88-9c87-3a5d85193889\") " pod="default/nfs-server-provisioner-0" Jan 29 16:29:49.468986 containerd[1903]: time="2025-01-29T16:29:49.468575792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c9940a99-85af-4b88-9c87-3a5d85193889,Namespace:default,Attempt:0,}" Jan 29 16:29:49.540186 kernel: eth0: renamed from tmp2ecc9 Jan 29 16:29:49.548304 systemd-networkd[1817]: lxca1fbdc43c985: Link UP Jan 29 16:29:49.549011 systemd-networkd[1817]: lxca1fbdc43c985: Gained carrier Jan 29 16:29:49.550926 (udev-worker)[3687]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:29:49.813082 containerd[1903]: time="2025-01-29T16:29:49.812781991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:29:49.813082 containerd[1903]: time="2025-01-29T16:29:49.812838695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:29:49.814176 containerd[1903]: time="2025-01-29T16:29:49.812853291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:29:49.814500 containerd[1903]: time="2025-01-29T16:29:49.814169212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:29:49.847379 systemd[1]: Started cri-containerd-2ecc94a37a33d66644ef2ed941cb23bea99a079df53408f4860508c8969cba56.scope - libcontainer container 2ecc94a37a33d66644ef2ed941cb23bea99a079df53408f4860508c8969cba56. 
Jan 29 16:29:49.914856 containerd[1903]: time="2025-01-29T16:29:49.914798506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c9940a99-85af-4b88-9c87-3a5d85193889,Namespace:default,Attempt:0,} returns sandbox id \"2ecc94a37a33d66644ef2ed941cb23bea99a079df53408f4860508c8969cba56\"" Jan 29 16:29:49.919107 containerd[1903]: time="2025-01-29T16:29:49.919060025Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 16:29:49.935780 kubelet[2394]: E0129 16:29:49.935732 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:50.938172 kubelet[2394]: E0129 16:29:50.936039 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:51.467779 systemd-networkd[1817]: lxca1fbdc43c985: Gained IPv6LL Jan 29 16:29:51.937011 kubelet[2394]: E0129 16:29:51.936960 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:52.822584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2211163545.mount: Deactivated successfully. Jan 29 16:29:52.937594 kubelet[2394]: E0129 16:29:52.937553 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:53.938708 kubelet[2394]: E0129 16:29:53.938637 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:54.161659 ntpd[1880]: Listen normally on 13 lxca1fbdc43c985 [fe80::5051:1cff:fef3:cae%11]:123 Jan 29 16:29:54.162185 ntpd[1880]: 29 Jan 16:29:54 ntpd[1880]: Listen normally on 13 lxca1fbdc43c985 [fe80::5051:1cff:fef3:cae%11]:123 Jan 29 16:29:54.939334 kubelet[2394]: E0129 16:29:54.939184 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:55.218103 containerd[1903]: time="2025-01-29T16:29:55.217964549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:29:55.219795 containerd[1903]: time="2025-01-29T16:29:55.219746080Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 29 16:29:55.221401 containerd[1903]: time="2025-01-29T16:29:55.221343712Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:29:55.225766 containerd[1903]: time="2025-01-29T16:29:55.225610095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:29:55.228854 containerd[1903]: time="2025-01-29T16:29:55.228688650Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.309587171s" Jan 29 16:29:55.228854 containerd[1903]: 
time="2025-01-29T16:29:55.228736862Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 29 16:29:55.231997 containerd[1903]: time="2025-01-29T16:29:55.231952528Z" level=info msg="CreateContainer within sandbox \"2ecc94a37a33d66644ef2ed941cb23bea99a079df53408f4860508c8969cba56\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 16:29:55.256390 containerd[1903]: time="2025-01-29T16:29:55.256320636Z" level=info msg="CreateContainer within sandbox \"2ecc94a37a33d66644ef2ed941cb23bea99a079df53408f4860508c8969cba56\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"94dd323638f0aba5c532a5addd4da45dacf67fc24bad7a5a1086f1a3ae636686\"" Jan 29 16:29:55.259595 containerd[1903]: time="2025-01-29T16:29:55.257858759Z" level=info msg="StartContainer for \"94dd323638f0aba5c532a5addd4da45dacf67fc24bad7a5a1086f1a3ae636686\"" Jan 29 16:29:55.322300 systemd[1]: Started cri-containerd-94dd323638f0aba5c532a5addd4da45dacf67fc24bad7a5a1086f1a3ae636686.scope - libcontainer container 94dd323638f0aba5c532a5addd4da45dacf67fc24bad7a5a1086f1a3ae636686. Jan 29 16:29:55.367917 containerd[1903]: time="2025-01-29T16:29:55.367855190Z" level=info msg="StartContainer for \"94dd323638f0aba5c532a5addd4da45dacf67fc24bad7a5a1086f1a3ae636686\" returns successfully" Jan 29 16:29:55.940112 kubelet[2394]: E0129 16:29:55.940052 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:56.356256 kubelet[2394]: I0129 16:29:56.356106 2394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.043968793 podStartE2EDuration="7.356083678s" podCreationTimestamp="2025-01-29 16:29:49 +0000 UTC" firstStartedPulling="2025-01-29 16:29:49.918024124 +0000 UTC m=+41.934171261" lastFinishedPulling="2025-01-29 16:29:55.230139001 +0000 UTC m=+47.246286146" observedRunningTime="2025-01-29 16:29:56.355637076 +0000 UTC m=+48.371784233" watchObservedRunningTime="2025-01-29 16:29:56.356083678 +0000 UTC m=+48.372230834" Jan 29 16:29:56.941047 kubelet[2394]: E0129 16:29:56.940988 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:57.942178 kubelet[2394]: E0129 16:29:57.942111 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:58.942970 kubelet[2394]: E0129 16:29:58.942914 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:29:59.944032 kubelet[2394]: E0129 16:29:59.943978 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:00.945055 kubelet[2394]: E0129 16:30:00.944992 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:01.946122 kubelet[2394]: E0129 16:30:01.946061 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:02.947238 kubelet[2394]: E0129 16:30:02.947178 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:03.947375 kubelet[2394]: E0129 
16:30:03.947302 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:04.947465 kubelet[2394]: E0129 16:30:04.947432 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:04.962395 systemd[1]: Created slice kubepods-besteffort-pod0a838dbd_c99c_49ce_b0f0_98df9d0621c0.slice - libcontainer container kubepods-besteffort-pod0a838dbd_c99c_49ce_b0f0_98df9d0621c0.slice. Jan 29 16:30:05.037964 kubelet[2394]: I0129 16:30:05.037827 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3b588fbc-c40c-45df-8b2d-a33a429e0bb7\" (UniqueName: \"kubernetes.io/nfs/0a838dbd-c99c-49ce-b0f0-98df9d0621c0-pvc-3b588fbc-c40c-45df-8b2d-a33a429e0bb7\") pod \"test-pod-1\" (UID: \"0a838dbd-c99c-49ce-b0f0-98df9d0621c0\") " pod="default/test-pod-1" Jan 29 16:30:05.037964 kubelet[2394]: I0129 16:30:05.037911 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvdf8\" (UniqueName: \"kubernetes.io/projected/0a838dbd-c99c-49ce-b0f0-98df9d0621c0-kube-api-access-hvdf8\") pod \"test-pod-1\" (UID: \"0a838dbd-c99c-49ce-b0f0-98df9d0621c0\") " pod="default/test-pod-1" Jan 29 16:30:05.187377 kernel: FS-Cache: Loaded Jan 29 16:30:05.319990 kernel: RPC: Registered named UNIX socket transport module. Jan 29 16:30:05.320120 kernel: RPC: Registered udp transport module. Jan 29 16:30:05.320177 kernel: RPC: Registered tcp transport module. Jan 29 16:30:05.321386 kernel: RPC: Registered tcp-with-tls transport module. Jan 29 16:30:05.321482 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 29 16:30:05.733184 kernel: NFS: Registering the id_resolver key type Jan 29 16:30:05.733364 kernel: Key type id_resolver registered Jan 29 16:30:05.733402 kernel: Key type id_legacy registered Jan 29 16:30:05.792171 nfsidmap[3872]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 29 16:30:05.796551 nfsidmap[3873]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 29 16:30:05.867934 containerd[1903]: time="2025-01-29T16:30:05.867876316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0a838dbd-c99c-49ce-b0f0-98df9d0621c0,Namespace:default,Attempt:0,}" Jan 29 16:30:05.949718 systemd-networkd[1817]: lxc49d1c9981b2f: Link UP Jan 29 16:30:05.950917 kubelet[2394]: E0129 16:30:05.950060 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:05.960799 kernel: eth0: renamed from tmpdbbd6 Jan 29 16:30:05.966473 systemd-networkd[1817]: lxc49d1c9981b2f: Gained carrier Jan 29 16:30:05.967110 (udev-worker)[3869]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:30:06.540102 containerd[1903]: time="2025-01-29T16:30:06.539812291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:30:06.540492 containerd[1903]: time="2025-01-29T16:30:06.540207587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:30:06.541208 containerd[1903]: time="2025-01-29T16:30:06.541108266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:30:06.541429 containerd[1903]: time="2025-01-29T16:30:06.541325732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:30:06.572366 systemd[1]: Started cri-containerd-dbbd65436997480316992d15a554903eef267633317cc90788317793a76e15d3.scope - libcontainer container dbbd65436997480316992d15a554903eef267633317cc90788317793a76e15d3. Jan 29 16:30:06.678859 containerd[1903]: time="2025-01-29T16:30:06.678812215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0a838dbd-c99c-49ce-b0f0-98df9d0621c0,Namespace:default,Attempt:0,} returns sandbox id \"dbbd65436997480316992d15a554903eef267633317cc90788317793a76e15d3\"" Jan 29 16:30:06.685865 containerd[1903]: time="2025-01-29T16:30:06.685440961Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 16:30:06.950740 kubelet[2394]: E0129 16:30:06.950674 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:07.056484 containerd[1903]: time="2025-01-29T16:30:07.056428655Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:30:07.059304 containerd[1903]: time="2025-01-29T16:30:07.058008412Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 29 16:30:07.064534 containerd[1903]: time="2025-01-29T16:30:07.064430920Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 378.953346ms" Jan 29 16:30:07.064534 containerd[1903]: time="2025-01-29T16:30:07.064519317Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 16:30:07.067594 containerd[1903]: time="2025-01-29T16:30:07.067506785Z" level=info msg="CreateContainer within sandbox \"dbbd65436997480316992d15a554903eef267633317cc90788317793a76e15d3\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 29 16:30:07.094379 containerd[1903]: time="2025-01-29T16:30:07.094334533Z" level=info msg="CreateContainer within sandbox \"dbbd65436997480316992d15a554903eef267633317cc90788317793a76e15d3\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"2cc3533c1e6fe18d2745b6bdbdf0c8298a2850a1ca809e6027b75bc78eb06d54\"" Jan 29 16:30:07.096739 containerd[1903]: time="2025-01-29T16:30:07.095291570Z" level=info msg="StartContainer for \"2cc3533c1e6fe18d2745b6bdbdf0c8298a2850a1ca809e6027b75bc78eb06d54\"" Jan 29 16:30:07.149513 systemd[1]: Started cri-containerd-2cc3533c1e6fe18d2745b6bdbdf0c8298a2850a1ca809e6027b75bc78eb06d54.scope - libcontainer container 2cc3533c1e6fe18d2745b6bdbdf0c8298a2850a1ca809e6027b75bc78eb06d54. 
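The nfsidmap warnings a few entries above mean the client's NFSv4 idmapping domain ("localdomain") does not match the domain sent by the NFS server ("nfs-server-provisioner.default.svc.cluster.local"), so unmapped owners typically fall back to nobody. A hedged sketch for checking the node's effective domain; nfsidmap ships with nfs-utils, and fixing a mismatch would mean setting Domain= in /etc/idmapd.conf (illustrative, nothing in the log shows that change being made):

    # Hedged sketch: print the node's effective NFSv4 idmapping domain and compare it
    # with the domain part of the principal in the warnings above.
    import subprocess

    result = subprocess.run(["nfsidmap", "-d"], capture_output=True, text=True, check=True)
    print("effective NFSv4 idmapping domain:", result.stdout.strip())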
Jan 29 16:30:07.212705 containerd[1903]: time="2025-01-29T16:30:07.212444774Z" level=info msg="StartContainer for \"2cc3533c1e6fe18d2745b6bdbdf0c8298a2850a1ca809e6027b75bc78eb06d54\" returns successfully" Jan 29 16:30:07.407589 kubelet[2394]: I0129 16:30:07.407497 2394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.026311466 podStartE2EDuration="17.407480165s" podCreationTimestamp="2025-01-29 16:29:50 +0000 UTC" firstStartedPulling="2025-01-29 16:30:06.684199389 +0000 UTC m=+58.700346526" lastFinishedPulling="2025-01-29 16:30:07.065368078 +0000 UTC m=+59.081515225" observedRunningTime="2025-01-29 16:30:07.407297363 +0000 UTC m=+59.423444519" watchObservedRunningTime="2025-01-29 16:30:07.407480165 +0000 UTC m=+59.423627320" Jan 29 16:30:07.851595 systemd-networkd[1817]: lxc49d1c9981b2f: Gained IPv6LL Jan 29 16:30:07.950916 kubelet[2394]: E0129 16:30:07.950862 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:08.884175 kubelet[2394]: E0129 16:30:08.884118 2394 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:08.951605 kubelet[2394]: E0129 16:30:08.951557 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:09.890353 systemd[1]: run-containerd-runc-k8s.io-91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21-runc.Ik0BDm.mount: Deactivated successfully. Jan 29 16:30:09.952365 kubelet[2394]: E0129 16:30:09.952314 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:10.057988 containerd[1903]: time="2025-01-29T16:30:10.057934042Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:30:10.160787 containerd[1903]: time="2025-01-29T16:30:10.160637344Z" level=info msg="StopContainer for \"91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21\" with timeout 2 (s)" Jan 29 16:30:10.161257 containerd[1903]: time="2025-01-29T16:30:10.161206233Z" level=info msg="Stop container \"91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21\" with signal terminated" Jan 29 16:30:10.161705 ntpd[1880]: Listen normally on 14 lxc49d1c9981b2f [fe80::7835:6cff:fe86:d699%13]:123 Jan 29 16:30:10.162126 ntpd[1880]: 29 Jan 16:30:10 ntpd[1880]: Listen normally on 14 lxc49d1c9981b2f [fe80::7835:6cff:fe86:d699%13]:123 Jan 29 16:30:10.209550 systemd-networkd[1817]: lxc_health: Link DOWN Jan 29 16:30:10.209562 systemd-networkd[1817]: lxc_health: Lost carrier Jan 29 16:30:10.278510 systemd[1]: cri-containerd-91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21.scope: Deactivated successfully. Jan 29 16:30:10.279372 systemd[1]: cri-containerd-91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21.scope: Consumed 8.753s CPU time, 125.3M memory peak, 872K read from disk, 29.4M written to disk. Jan 29 16:30:10.308906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21-rootfs.mount: Deactivated successfully. 
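The StopContainer entries above record containerd terminating the cilium-agent container with a 2 second grace period (SIGTERM first, then a hard kill once the timeout elapses), after which systemd reports the scope's accumulated CPU and memory usage. A small sketch of the same request issued by hand through crictl, which Flatcar images ship alongside containerd:

    # Hedged sketch: stop a CRI container with a short grace period, mirroring the
    # "StopContainer ... with timeout 2 (s)" entry above. The container ID is the one
    # from the log; `crictl stop --timeout` requests SIGTERM, then SIGKILL after N seconds.
    import subprocess

    CONTAINER_ID = "91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21"

    def stop_container(container_id: str, grace_seconds: int = 2) -> None:
        subprocess.run(
            ["crictl", "stop", "--timeout", str(grace_seconds), container_id],
            check=True,
        )

    stop_container(CONTAINER_ID)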
Jan 29 16:30:10.322434 containerd[1903]: time="2025-01-29T16:30:10.322364475Z" level=info msg="shim disconnected" id=91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21 namespace=k8s.io Jan 29 16:30:10.322434 containerd[1903]: time="2025-01-29T16:30:10.322425578Z" level=warning msg="cleaning up after shim disconnected" id=91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21 namespace=k8s.io Jan 29 16:30:10.322434 containerd[1903]: time="2025-01-29T16:30:10.322436822Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:30:10.343683 containerd[1903]: time="2025-01-29T16:30:10.343577237Z" level=info msg="StopContainer for \"91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21\" returns successfully" Jan 29 16:30:10.345670 containerd[1903]: time="2025-01-29T16:30:10.345524148Z" level=info msg="StopPodSandbox for \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\"" Jan 29 16:30:10.352951 containerd[1903]: time="2025-01-29T16:30:10.345572204Z" level=info msg="Container to stop \"4e20339a22f043e811ff64d603be773cb4be1ce464d8b24b50e70e2e8547edb5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:30:10.352951 containerd[1903]: time="2025-01-29T16:30:10.352944868Z" level=info msg="Container to stop \"d6647a262d6d6cca9717ceba86243498b59da66efc83b494f92c8c5460f4fae8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:30:10.353253 containerd[1903]: time="2025-01-29T16:30:10.352974259Z" level=info msg="Container to stop \"178f0a3c82339ffb5c9ed72c16dfda07bfe3184334db1226c0675c5318fed39f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:30:10.353253 containerd[1903]: time="2025-01-29T16:30:10.352986886Z" level=info msg="Container to stop \"91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:30:10.353253 containerd[1903]: time="2025-01-29T16:30:10.352999707Z" level=info msg="Container to stop \"697d5ae59082fe39ca62126f030922452ab241219bfe2c5042ba5c133eba7c02\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:30:10.361371 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a-shm.mount: Deactivated successfully. Jan 29 16:30:10.368989 systemd[1]: cri-containerd-061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a.scope: Deactivated successfully. Jan 29 16:30:10.407942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a-rootfs.mount: Deactivated successfully. 
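The StopPodSandbox entries above also explain the repeated "must be in running or unknown state, current state CONTAINER_EXITED" messages: only containers that are still running need to be stopped, and the init containers listed had already exited, so the sandbox can be torn down right away. A hedged sketch of the equivalent manual teardown order with crictl (stop the sandbox, then remove it):

    # Hedged sketch: manual pod-sandbox teardown in the order the log shows.
    # `crictl stopp` stops a pod sandbox, `crictl rmp` removes it.
    import subprocess

    SANDBOX_ID = "061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a"

    def run(*args: str) -> None:
        subprocess.run(list(args), check=True)

    run("crictl", "stopp", SANDBOX_ID)   # stop the sandbox; already-exited containers are skipped
    run("crictl", "rmp", SANDBOX_ID)     # remove it once its network has been torn down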
Jan 29 16:30:10.415813 containerd[1903]: time="2025-01-29T16:30:10.415629774Z" level=info msg="shim disconnected" id=061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a namespace=k8s.io Jan 29 16:30:10.415813 containerd[1903]: time="2025-01-29T16:30:10.415696991Z" level=warning msg="cleaning up after shim disconnected" id=061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a namespace=k8s.io Jan 29 16:30:10.415813 containerd[1903]: time="2025-01-29T16:30:10.415710718Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:30:10.436651 containerd[1903]: time="2025-01-29T16:30:10.436599963Z" level=info msg="TearDown network for sandbox \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\" successfully" Jan 29 16:30:10.436651 containerd[1903]: time="2025-01-29T16:30:10.436643230Z" level=info msg="StopPodSandbox for \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\" returns successfully" Jan 29 16:30:10.488174 kubelet[2394]: I0129 16:30:10.485413 2394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-bpf-maps\") pod \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " Jan 29 16:30:10.488174 kubelet[2394]: I0129 16:30:10.485471 2394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-cilium-config-path\") pod \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " Jan 29 16:30:10.488174 kubelet[2394]: I0129 16:30:10.485474 2394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" (UID: "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:10.488174 kubelet[2394]: I0129 16:30:10.485497 2394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-cni-path\") pod \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " Jan 29 16:30:10.488174 kubelet[2394]: I0129 16:30:10.485520 2394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-etc-cni-netd\") pod \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " Jan 29 16:30:10.488174 kubelet[2394]: I0129 16:30:10.485545 2394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-clustermesh-secrets\") pod \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " Jan 29 16:30:10.489298 kubelet[2394]: I0129 16:30:10.485573 2394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-lib-modules\") pod \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " Jan 29 16:30:10.489298 kubelet[2394]: I0129 16:30:10.485595 2394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-cilium-run\") pod \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " Jan 29 16:30:10.489298 kubelet[2394]: I0129 16:30:10.485617 2394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-host-proc-sys-kernel\") pod \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " Jan 29 16:30:10.489298 kubelet[2394]: I0129 16:30:10.485638 2394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-host-proc-sys-net\") pod \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " Jan 29 16:30:10.489298 kubelet[2394]: I0129 16:30:10.485661 2394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-cilium-cgroup\") pod \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " Jan 29 16:30:10.489298 kubelet[2394]: I0129 16:30:10.485681 2394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-xtables-lock\") pod \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " Jan 29 16:30:10.490384 kubelet[2394]: I0129 16:30:10.485702 2394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-hostproc\") pod \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " Jan 29 
16:30:10.490384 kubelet[2394]: I0129 16:30:10.485730 2394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-hubble-tls\") pod \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " Jan 29 16:30:10.490384 kubelet[2394]: I0129 16:30:10.485759 2394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pff7\" (UniqueName: \"kubernetes.io/projected/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-kube-api-access-4pff7\") pod \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\" (UID: \"8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448\") " Jan 29 16:30:10.490384 kubelet[2394]: I0129 16:30:10.485798 2394 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-bpf-maps\") on node \"172.31.31.0\" DevicePath \"\"" Jan 29 16:30:10.490949 kubelet[2394]: I0129 16:30:10.490908 2394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" (UID: "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:10.491897 kubelet[2394]: I0129 16:30:10.490906 2394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" (UID: "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:10.491897 kubelet[2394]: I0129 16:30:10.490982 2394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" (UID: "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:10.492246 kubelet[2394]: I0129 16:30:10.492169 2394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-cni-path" (OuterVolumeSpecName: "cni-path") pod "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" (UID: "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:10.492608 kubelet[2394]: I0129 16:30:10.492536 2394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" (UID: "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:10.493232 kubelet[2394]: I0129 16:30:10.493205 2394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" (UID: "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:10.493305 kubelet[2394]: I0129 16:30:10.493252 2394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-hostproc" (OuterVolumeSpecName: "hostproc") pod "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" (UID: "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:10.501863 kubelet[2394]: I0129 16:30:10.501809 2394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-kube-api-access-4pff7" (OuterVolumeSpecName: "kube-api-access-4pff7") pod "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" (UID: "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448"). InnerVolumeSpecName "kube-api-access-4pff7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:10.502014 kubelet[2394]: I0129 16:30:10.501914 2394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" (UID: "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:10.502014 kubelet[2394]: I0129 16:30:10.501945 2394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" (UID: "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:30:10.502866 kubelet[2394]: I0129 16:30:10.502771 2394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" (UID: "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:30:10.506905 kubelet[2394]: I0129 16:30:10.506848 2394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" (UID: "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:30:10.511418 kubelet[2394]: I0129 16:30:10.511364 2394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" (UID: "8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:30:10.586872 kubelet[2394]: I0129 16:30:10.586820 2394 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-cilium-config-path\") on node \"172.31.31.0\" DevicePath \"\"" Jan 29 16:30:10.586872 kubelet[2394]: I0129 16:30:10.586857 2394 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-cni-path\") on node \"172.31.31.0\" DevicePath \"\"" Jan 29 16:30:10.586872 kubelet[2394]: I0129 16:30:10.586871 2394 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-etc-cni-netd\") on node \"172.31.31.0\" DevicePath \"\"" Jan 29 16:30:10.586872 kubelet[2394]: I0129 16:30:10.586882 2394 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-clustermesh-secrets\") on node \"172.31.31.0\" DevicePath \"\"" Jan 29 16:30:10.587296 kubelet[2394]: I0129 16:30:10.586893 2394 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-lib-modules\") on node \"172.31.31.0\" DevicePath \"\"" Jan 29 16:30:10.587296 kubelet[2394]: I0129 16:30:10.586906 2394 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-cilium-run\") on node \"172.31.31.0\" DevicePath \"\"" Jan 29 16:30:10.587296 kubelet[2394]: I0129 16:30:10.586916 2394 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-host-proc-sys-kernel\") on node \"172.31.31.0\" DevicePath \"\"" Jan 29 16:30:10.587296 kubelet[2394]: I0129 16:30:10.586953 2394 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-host-proc-sys-net\") on node \"172.31.31.0\" DevicePath \"\"" Jan 29 16:30:10.587296 kubelet[2394]: I0129 16:30:10.586965 2394 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-cilium-cgroup\") on node \"172.31.31.0\" DevicePath \"\"" Jan 29 16:30:10.587296 kubelet[2394]: I0129 16:30:10.586977 2394 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-xtables-lock\") on node \"172.31.31.0\" DevicePath \"\"" Jan 29 16:30:10.587296 kubelet[2394]: I0129 16:30:10.586988 2394 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-hostproc\") on node \"172.31.31.0\" DevicePath \"\"" Jan 29 16:30:10.587296 kubelet[2394]: I0129 16:30:10.586998 2394 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-hubble-tls\") on node \"172.31.31.0\" DevicePath \"\"" Jan 29 16:30:10.587499 kubelet[2394]: I0129 16:30:10.587010 2394 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4pff7\" (UniqueName: \"kubernetes.io/projected/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448-kube-api-access-4pff7\") on 
node \"172.31.31.0\" DevicePath \"\"" Jan 29 16:30:10.883360 systemd[1]: var-lib-kubelet-pods-8b0e7eaf\x2d80d3\x2d4dfa\x2d8e7b\x2d99c4fc7e5448-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4pff7.mount: Deactivated successfully. Jan 29 16:30:10.883514 systemd[1]: var-lib-kubelet-pods-8b0e7eaf\x2d80d3\x2d4dfa\x2d8e7b\x2d99c4fc7e5448-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 16:30:10.883607 systemd[1]: var-lib-kubelet-pods-8b0e7eaf\x2d80d3\x2d4dfa\x2d8e7b\x2d99c4fc7e5448-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 16:30:10.953520 kubelet[2394]: E0129 16:30:10.953467 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:11.152974 systemd[1]: Removed slice kubepods-burstable-pod8b0e7eaf_80d3_4dfa_8e7b_99c4fc7e5448.slice - libcontainer container kubepods-burstable-pod8b0e7eaf_80d3_4dfa_8e7b_99c4fc7e5448.slice. Jan 29 16:30:11.153132 systemd[1]: kubepods-burstable-pod8b0e7eaf_80d3_4dfa_8e7b_99c4fc7e5448.slice: Consumed 8.858s CPU time, 125.6M memory peak, 872K read from disk, 29.4M written to disk. Jan 29 16:30:11.413301 kubelet[2394]: I0129 16:30:11.413183 2394 scope.go:117] "RemoveContainer" containerID="91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21" Jan 29 16:30:11.416803 containerd[1903]: time="2025-01-29T16:30:11.416743480Z" level=info msg="RemoveContainer for \"91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21\"" Jan 29 16:30:11.431291 containerd[1903]: time="2025-01-29T16:30:11.431244084Z" level=info msg="RemoveContainer for \"91a613218fe72be8d61246c9a4afc98b250c0b85db75188ffe81cccceaa92f21\" returns successfully" Jan 29 16:30:11.431600 kubelet[2394]: I0129 16:30:11.431569 2394 scope.go:117] "RemoveContainer" containerID="178f0a3c82339ffb5c9ed72c16dfda07bfe3184334db1226c0675c5318fed39f" Jan 29 16:30:11.433115 containerd[1903]: time="2025-01-29T16:30:11.433082906Z" level=info msg="RemoveContainer for \"178f0a3c82339ffb5c9ed72c16dfda07bfe3184334db1226c0675c5318fed39f\"" Jan 29 16:30:11.439728 containerd[1903]: time="2025-01-29T16:30:11.439688407Z" level=info msg="RemoveContainer for \"178f0a3c82339ffb5c9ed72c16dfda07bfe3184334db1226c0675c5318fed39f\" returns successfully" Jan 29 16:30:11.440050 kubelet[2394]: I0129 16:30:11.440017 2394 scope.go:117] "RemoveContainer" containerID="697d5ae59082fe39ca62126f030922452ab241219bfe2c5042ba5c133eba7c02" Jan 29 16:30:11.441558 containerd[1903]: time="2025-01-29T16:30:11.441526026Z" level=info msg="RemoveContainer for \"697d5ae59082fe39ca62126f030922452ab241219bfe2c5042ba5c133eba7c02\"" Jan 29 16:30:11.446615 containerd[1903]: time="2025-01-29T16:30:11.446574100Z" level=info msg="RemoveContainer for \"697d5ae59082fe39ca62126f030922452ab241219bfe2c5042ba5c133eba7c02\" returns successfully" Jan 29 16:30:11.446952 kubelet[2394]: I0129 16:30:11.446900 2394 scope.go:117] "RemoveContainer" containerID="d6647a262d6d6cca9717ceba86243498b59da66efc83b494f92c8c5460f4fae8" Jan 29 16:30:11.448122 containerd[1903]: time="2025-01-29T16:30:11.448079025Z" level=info msg="RemoveContainer for \"d6647a262d6d6cca9717ceba86243498b59da66efc83b494f92c8c5460f4fae8\"" Jan 29 16:30:11.451973 containerd[1903]: time="2025-01-29T16:30:11.451936884Z" level=info msg="RemoveContainer for \"d6647a262d6d6cca9717ceba86243498b59da66efc83b494f92c8c5460f4fae8\" returns successfully" Jan 29 16:30:11.452336 kubelet[2394]: I0129 16:30:11.452314 2394 scope.go:117] 
"RemoveContainer" containerID="4e20339a22f043e811ff64d603be773cb4be1ce464d8b24b50e70e2e8547edb5" Jan 29 16:30:11.453885 containerd[1903]: time="2025-01-29T16:30:11.453849970Z" level=info msg="RemoveContainer for \"4e20339a22f043e811ff64d603be773cb4be1ce464d8b24b50e70e2e8547edb5\"" Jan 29 16:30:11.457847 containerd[1903]: time="2025-01-29T16:30:11.457801727Z" level=info msg="RemoveContainer for \"4e20339a22f043e811ff64d603be773cb4be1ce464d8b24b50e70e2e8547edb5\" returns successfully" Jan 29 16:30:11.953972 kubelet[2394]: E0129 16:30:11.953918 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:12.954841 kubelet[2394]: E0129 16:30:12.954631 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:13.150660 kubelet[2394]: I0129 16:30:13.150616 2394 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" path="/var/lib/kubelet/pods/8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448/volumes" Jan 29 16:30:13.161685 ntpd[1880]: Deleting interface #11 lxc_health, fe80::9ce6:d5ff:fec5:d4ae%7#123, interface stats: received=0, sent=0, dropped=0, active_time=39 secs Jan 29 16:30:13.162061 ntpd[1880]: 29 Jan 16:30:13 ntpd[1880]: Deleting interface #11 lxc_health, fe80::9ce6:d5ff:fec5:d4ae%7#123, interface stats: received=0, sent=0, dropped=0, active_time=39 secs Jan 29 16:30:13.356702 kubelet[2394]: E0129 16:30:13.356608 2394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" containerName="mount-cgroup" Jan 29 16:30:13.356702 kubelet[2394]: E0129 16:30:13.356687 2394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" containerName="cilium-agent" Jan 29 16:30:13.356702 kubelet[2394]: E0129 16:30:13.356699 2394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" containerName="apply-sysctl-overwrites" Jan 29 16:30:13.356702 kubelet[2394]: E0129 16:30:13.356707 2394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" containerName="mount-bpf-fs" Jan 29 16:30:13.357495 kubelet[2394]: E0129 16:30:13.356721 2394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" containerName="clean-cilium-state" Jan 29 16:30:13.357495 kubelet[2394]: I0129 16:30:13.356750 2394 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b0e7eaf-80d3-4dfa-8e7b-99c4fc7e5448" containerName="cilium-agent" Jan 29 16:30:13.373405 systemd[1]: Created slice kubepods-besteffort-pod07e7ea0d_8eba_4747_bf19_ae4ecbca7abe.slice - libcontainer container kubepods-besteffort-pod07e7ea0d_8eba_4747_bf19_ae4ecbca7abe.slice. Jan 29 16:30:13.425479 systemd[1]: Created slice kubepods-burstable-pod8b2b6733_f416_4148_91bc_57cea22492c9.slice - libcontainer container kubepods-burstable-pod8b2b6733_f416_4148_91bc_57cea22492c9.slice. 
Jan 29 16:30:13.501607 kubelet[2394]: I0129 16:30:13.501559 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07e7ea0d-8eba-4747-bf19-ae4ecbca7abe-cilium-config-path\") pod \"cilium-operator-5d85765b45-ftm6v\" (UID: \"07e7ea0d-8eba-4747-bf19-ae4ecbca7abe\") " pod="kube-system/cilium-operator-5d85765b45-ftm6v" Jan 29 16:30:13.501797 kubelet[2394]: I0129 16:30:13.501617 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv4tf\" (UniqueName: \"kubernetes.io/projected/07e7ea0d-8eba-4747-bf19-ae4ecbca7abe-kube-api-access-xv4tf\") pod \"cilium-operator-5d85765b45-ftm6v\" (UID: \"07e7ea0d-8eba-4747-bf19-ae4ecbca7abe\") " pod="kube-system/cilium-operator-5d85765b45-ftm6v" Jan 29 16:30:13.602415 kubelet[2394]: I0129 16:30:13.601918 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b2b6733-f416-4148-91bc-57cea22492c9-cilium-config-path\") pod \"cilium-75nbm\" (UID: \"8b2b6733-f416-4148-91bc-57cea22492c9\") " pod="kube-system/cilium-75nbm" Jan 29 16:30:13.602415 kubelet[2394]: I0129 16:30:13.601975 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8b2b6733-f416-4148-91bc-57cea22492c9-host-proc-sys-kernel\") pod \"cilium-75nbm\" (UID: \"8b2b6733-f416-4148-91bc-57cea22492c9\") " pod="kube-system/cilium-75nbm" Jan 29 16:30:13.602415 kubelet[2394]: I0129 16:30:13.602002 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8b2b6733-f416-4148-91bc-57cea22492c9-hubble-tls\") pod \"cilium-75nbm\" (UID: \"8b2b6733-f416-4148-91bc-57cea22492c9\") " pod="kube-system/cilium-75nbm" Jan 29 16:30:13.602415 kubelet[2394]: I0129 16:30:13.602026 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b2b6733-f416-4148-91bc-57cea22492c9-xtables-lock\") pod \"cilium-75nbm\" (UID: \"8b2b6733-f416-4148-91bc-57cea22492c9\") " pod="kube-system/cilium-75nbm" Jan 29 16:30:13.602415 kubelet[2394]: I0129 16:30:13.602046 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8b2b6733-f416-4148-91bc-57cea22492c9-host-proc-sys-net\") pod \"cilium-75nbm\" (UID: \"8b2b6733-f416-4148-91bc-57cea22492c9\") " pod="kube-system/cilium-75nbm" Jan 29 16:30:13.602415 kubelet[2394]: I0129 16:30:13.602068 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8b2b6733-f416-4148-91bc-57cea22492c9-cilium-run\") pod \"cilium-75nbm\" (UID: \"8b2b6733-f416-4148-91bc-57cea22492c9\") " pod="kube-system/cilium-75nbm" Jan 29 16:30:13.602802 kubelet[2394]: I0129 16:30:13.602090 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8b2b6733-f416-4148-91bc-57cea22492c9-hostproc\") pod \"cilium-75nbm\" (UID: \"8b2b6733-f416-4148-91bc-57cea22492c9\") " pod="kube-system/cilium-75nbm" Jan 29 16:30:13.602802 kubelet[2394]: I0129 16:30:13.602110 2394 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8b2b6733-f416-4148-91bc-57cea22492c9-cilium-cgroup\") pod \"cilium-75nbm\" (UID: \"8b2b6733-f416-4148-91bc-57cea22492c9\") " pod="kube-system/cilium-75nbm" Jan 29 16:30:13.602802 kubelet[2394]: I0129 16:30:13.602132 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8b2b6733-f416-4148-91bc-57cea22492c9-cni-path\") pod \"cilium-75nbm\" (UID: \"8b2b6733-f416-4148-91bc-57cea22492c9\") " pod="kube-system/cilium-75nbm" Jan 29 16:30:13.602802 kubelet[2394]: I0129 16:30:13.602183 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8b2b6733-f416-4148-91bc-57cea22492c9-etc-cni-netd\") pod \"cilium-75nbm\" (UID: \"8b2b6733-f416-4148-91bc-57cea22492c9\") " pod="kube-system/cilium-75nbm" Jan 29 16:30:13.602802 kubelet[2394]: I0129 16:30:13.602220 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8b2b6733-f416-4148-91bc-57cea22492c9-cilium-ipsec-secrets\") pod \"cilium-75nbm\" (UID: \"8b2b6733-f416-4148-91bc-57cea22492c9\") " pod="kube-system/cilium-75nbm" Jan 29 16:30:13.602802 kubelet[2394]: I0129 16:30:13.602242 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fwvk\" (UniqueName: \"kubernetes.io/projected/8b2b6733-f416-4148-91bc-57cea22492c9-kube-api-access-6fwvk\") pod \"cilium-75nbm\" (UID: \"8b2b6733-f416-4148-91bc-57cea22492c9\") " pod="kube-system/cilium-75nbm" Jan 29 16:30:13.603063 kubelet[2394]: I0129 16:30:13.602266 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8b2b6733-f416-4148-91bc-57cea22492c9-bpf-maps\") pod \"cilium-75nbm\" (UID: \"8b2b6733-f416-4148-91bc-57cea22492c9\") " pod="kube-system/cilium-75nbm" Jan 29 16:30:13.603063 kubelet[2394]: I0129 16:30:13.602287 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b2b6733-f416-4148-91bc-57cea22492c9-lib-modules\") pod \"cilium-75nbm\" (UID: \"8b2b6733-f416-4148-91bc-57cea22492c9\") " pod="kube-system/cilium-75nbm" Jan 29 16:30:13.603063 kubelet[2394]: I0129 16:30:13.602308 2394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8b2b6733-f416-4148-91bc-57cea22492c9-clustermesh-secrets\") pod \"cilium-75nbm\" (UID: \"8b2b6733-f416-4148-91bc-57cea22492c9\") " pod="kube-system/cilium-75nbm" Jan 29 16:30:13.683545 containerd[1903]: time="2025-01-29T16:30:13.683287492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ftm6v,Uid:07e7ea0d-8eba-4747-bf19-ae4ecbca7abe,Namespace:kube-system,Attempt:0,}" Jan 29 16:30:13.761942 containerd[1903]: time="2025-01-29T16:30:13.761142709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:30:13.761942 containerd[1903]: time="2025-01-29T16:30:13.761226980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:30:13.761942 containerd[1903]: time="2025-01-29T16:30:13.761239221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:30:13.761942 containerd[1903]: time="2025-01-29T16:30:13.761331966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:30:13.791396 systemd[1]: Started cri-containerd-c90a1885bf08a3291aaafe6aee9866fdc251b8bfca20b23915850cf4de3ebfe7.scope - libcontainer container c90a1885bf08a3291aaafe6aee9866fdc251b8bfca20b23915850cf4de3ebfe7. Jan 29 16:30:13.852010 containerd[1903]: time="2025-01-29T16:30:13.851968682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ftm6v,Uid:07e7ea0d-8eba-4747-bf19-ae4ecbca7abe,Namespace:kube-system,Attempt:0,} returns sandbox id \"c90a1885bf08a3291aaafe6aee9866fdc251b8bfca20b23915850cf4de3ebfe7\"" Jan 29 16:30:13.854480 containerd[1903]: time="2025-01-29T16:30:13.854447100Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 16:30:13.956118 kubelet[2394]: E0129 16:30:13.955885 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:14.051839 containerd[1903]: time="2025-01-29T16:30:14.051302201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-75nbm,Uid:8b2b6733-f416-4148-91bc-57cea22492c9,Namespace:kube-system,Attempt:0,}" Jan 29 16:30:14.104635 containerd[1903]: time="2025-01-29T16:30:14.104402436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:30:14.104635 containerd[1903]: time="2025-01-29T16:30:14.104472078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:30:14.104635 containerd[1903]: time="2025-01-29T16:30:14.104557778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:30:14.105513 containerd[1903]: time="2025-01-29T16:30:14.105353572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:30:14.108774 kubelet[2394]: E0129 16:30:14.108732 2394 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 16:30:14.132390 systemd[1]: Started cri-containerd-73cee41d02052ccdc771f6a30fd5fca5589249d232c45f4f9b1f41c1259d04f6.scope - libcontainer container 73cee41d02052ccdc771f6a30fd5fca5589249d232c45f4f9b1f41c1259d04f6. 
Jan 29 16:30:14.170680 containerd[1903]: time="2025-01-29T16:30:14.170568529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-75nbm,Uid:8b2b6733-f416-4148-91bc-57cea22492c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"73cee41d02052ccdc771f6a30fd5fca5589249d232c45f4f9b1f41c1259d04f6\"" Jan 29 16:30:14.175533 containerd[1903]: time="2025-01-29T16:30:14.175472439Z" level=info msg="CreateContainer within sandbox \"73cee41d02052ccdc771f6a30fd5fca5589249d232c45f4f9b1f41c1259d04f6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:30:14.194865 containerd[1903]: time="2025-01-29T16:30:14.194740690Z" level=info msg="CreateContainer within sandbox \"73cee41d02052ccdc771f6a30fd5fca5589249d232c45f4f9b1f41c1259d04f6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"86abbd97ea9c3aa0fc01bf21e94aaa5a359ceb6b4917488e9ef54a33c1522631\"" Jan 29 16:30:14.195485 containerd[1903]: time="2025-01-29T16:30:14.195451277Z" level=info msg="StartContainer for \"86abbd97ea9c3aa0fc01bf21e94aaa5a359ceb6b4917488e9ef54a33c1522631\"" Jan 29 16:30:14.231379 systemd[1]: Started cri-containerd-86abbd97ea9c3aa0fc01bf21e94aaa5a359ceb6b4917488e9ef54a33c1522631.scope - libcontainer container 86abbd97ea9c3aa0fc01bf21e94aaa5a359ceb6b4917488e9ef54a33c1522631. Jan 29 16:30:14.269510 containerd[1903]: time="2025-01-29T16:30:14.269445580Z" level=info msg="StartContainer for \"86abbd97ea9c3aa0fc01bf21e94aaa5a359ceb6b4917488e9ef54a33c1522631\" returns successfully" Jan 29 16:30:14.343875 systemd[1]: cri-containerd-86abbd97ea9c3aa0fc01bf21e94aaa5a359ceb6b4917488e9ef54a33c1522631.scope: Deactivated successfully. Jan 29 16:30:14.344769 systemd[1]: cri-containerd-86abbd97ea9c3aa0fc01bf21e94aaa5a359ceb6b4917488e9ef54a33c1522631.scope: Consumed 22ms CPU time, 9.8M memory peak, 3.3M read from disk. 
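The slice names in these entries are derived from each pod's QoS class and UID with the dashes replaced by underscores (compare the UID 8b2b6733-f416-4148-91bc-57cea22492c9 with kubepods-burstable-pod8b2b6733_f416_4148_91bc_57cea22492c9.slice), and the "Consumed ... CPU time, ... memory peak" summaries come from cgroup v2 accounting that systemd reads when a scope or slice is deactivated. A hedged sketch that rebuilds the slice path for a pod and reads the same kind of counters; the nesting under kubepods.slice assumes the systemd cgroup driver:

    # Hedged sketch: map a pod UID to its systemd slice (as seen in the "Created slice"
    # entries above) and read cgroup v2 CPU / peak-memory accounting for it.
    from pathlib import Path

    def pod_slice(pod_uid: str, qos_class: str) -> str:
        # e.g. kubepods-burstable-pod8b2b6733_f416_4148_91bc_57cea22492c9.slice
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    def pod_cgroup_path(pod_uid: str, qos_class: str) -> Path:
        # Typical layout with the systemd cgroup driver; illustrative, not taken from the log.
        return (
            Path("/sys/fs/cgroup/kubepods.slice")
            / f"kubepods-{qos_class}.slice"
            / pod_slice(pod_uid, qos_class)
        )

    cgroup = pod_cgroup_path("8b2b6733-f416-4148-91bc-57cea22492c9", "burstable")
    usage_usec = int((cgroup / "cpu.stat").read_text().splitlines()[0].split()[1])
    memory_peak = int((cgroup / "memory.peak").read_text())
    print(f"cpu: {usage_usec / 1e6:.3f} s, memory peak: {memory_peak / 2**20:.1f} MiB")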
Jan 29 16:30:14.420030 containerd[1903]: time="2025-01-29T16:30:14.419968699Z" level=info msg="shim disconnected" id=86abbd97ea9c3aa0fc01bf21e94aaa5a359ceb6b4917488e9ef54a33c1522631 namespace=k8s.io Jan 29 16:30:14.421375 containerd[1903]: time="2025-01-29T16:30:14.420622360Z" level=warning msg="cleaning up after shim disconnected" id=86abbd97ea9c3aa0fc01bf21e94aaa5a359ceb6b4917488e9ef54a33c1522631 namespace=k8s.io Jan 29 16:30:14.421375 containerd[1903]: time="2025-01-29T16:30:14.420760603Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:30:14.956608 kubelet[2394]: E0129 16:30:14.956543 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:15.440501 containerd[1903]: time="2025-01-29T16:30:15.440279339Z" level=info msg="CreateContainer within sandbox \"73cee41d02052ccdc771f6a30fd5fca5589249d232c45f4f9b1f41c1259d04f6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:30:15.468736 containerd[1903]: time="2025-01-29T16:30:15.468677328Z" level=info msg="CreateContainer within sandbox \"73cee41d02052ccdc771f6a30fd5fca5589249d232c45f4f9b1f41c1259d04f6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d004f0b2f29e3390ba241a6c9f665cd69be5c98af61da5104db7f65755857781\"" Jan 29 16:30:15.471628 containerd[1903]: time="2025-01-29T16:30:15.470195326Z" level=info msg="StartContainer for \"d004f0b2f29e3390ba241a6c9f665cd69be5c98af61da5104db7f65755857781\"" Jan 29 16:30:15.536586 systemd[1]: run-containerd-runc-k8s.io-d004f0b2f29e3390ba241a6c9f665cd69be5c98af61da5104db7f65755857781-runc.hYeyUh.mount: Deactivated successfully. Jan 29 16:30:15.545388 systemd[1]: Started cri-containerd-d004f0b2f29e3390ba241a6c9f665cd69be5c98af61da5104db7f65755857781.scope - libcontainer container d004f0b2f29e3390ba241a6c9f665cd69be5c98af61da5104db7f65755857781. Jan 29 16:30:15.641633 containerd[1903]: time="2025-01-29T16:30:15.641581763Z" level=info msg="StartContainer for \"d004f0b2f29e3390ba241a6c9f665cd69be5c98af61da5104db7f65755857781\" returns successfully" Jan 29 16:30:15.673830 systemd[1]: cri-containerd-d004f0b2f29e3390ba241a6c9f665cd69be5c98af61da5104db7f65755857781.scope: Deactivated successfully. Jan 29 16:30:15.674201 systemd[1]: cri-containerd-d004f0b2f29e3390ba241a6c9f665cd69be5c98af61da5104db7f65755857781.scope: Consumed 21ms CPU time, 7.4M memory peak, 2.2M read from disk. Jan 29 16:30:15.746083 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d004f0b2f29e3390ba241a6c9f665cd69be5c98af61da5104db7f65755857781-rootfs.mount: Deactivated successfully. Jan 29 16:30:15.765174 containerd[1903]: time="2025-01-29T16:30:15.765093578Z" level=info msg="shim disconnected" id=d004f0b2f29e3390ba241a6c9f665cd69be5c98af61da5104db7f65755857781 namespace=k8s.io Jan 29 16:30:15.765652 containerd[1903]: time="2025-01-29T16:30:15.765443039Z" level=warning msg="cleaning up after shim disconnected" id=d004f0b2f29e3390ba241a6c9f665cd69be5c98af61da5104db7f65755857781 namespace=k8s.io Jan 29 16:30:15.765652 containerd[1903]: time="2025-01-29T16:30:15.765467507Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:30:15.955686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3489964825.mount: Deactivated successfully. 
Jan 29 16:30:15.957315 kubelet[2394]: E0129 16:30:15.957273 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:16.445295 containerd[1903]: time="2025-01-29T16:30:16.445241442Z" level=info msg="CreateContainer within sandbox \"73cee41d02052ccdc771f6a30fd5fca5589249d232c45f4f9b1f41c1259d04f6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:30:16.480100 containerd[1903]: time="2025-01-29T16:30:16.480049177Z" level=info msg="CreateContainer within sandbox \"73cee41d02052ccdc771f6a30fd5fca5589249d232c45f4f9b1f41c1259d04f6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bedc62372d5293f34981b82a8fc867174c77b8c5ead39bece0054bc55568aeee\"" Jan 29 16:30:16.480976 containerd[1903]: time="2025-01-29T16:30:16.480876684Z" level=info msg="StartContainer for \"bedc62372d5293f34981b82a8fc867174c77b8c5ead39bece0054bc55568aeee\"" Jan 29 16:30:16.526639 systemd[1]: Started cri-containerd-bedc62372d5293f34981b82a8fc867174c77b8c5ead39bece0054bc55568aeee.scope - libcontainer container bedc62372d5293f34981b82a8fc867174c77b8c5ead39bece0054bc55568aeee. Jan 29 16:30:16.577644 containerd[1903]: time="2025-01-29T16:30:16.577269604Z" level=info msg="StartContainer for \"bedc62372d5293f34981b82a8fc867174c77b8c5ead39bece0054bc55568aeee\" returns successfully" Jan 29 16:30:16.879710 systemd[1]: cri-containerd-bedc62372d5293f34981b82a8fc867174c77b8c5ead39bece0054bc55568aeee.scope: Deactivated successfully. Jan 29 16:30:16.952633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bedc62372d5293f34981b82a8fc867174c77b8c5ead39bece0054bc55568aeee-rootfs.mount: Deactivated successfully. Jan 29 16:30:16.960377 kubelet[2394]: E0129 16:30:16.960326 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:16.973966 containerd[1903]: time="2025-01-29T16:30:16.973254877Z" level=info msg="shim disconnected" id=bedc62372d5293f34981b82a8fc867174c77b8c5ead39bece0054bc55568aeee namespace=k8s.io Jan 29 16:30:16.973966 containerd[1903]: time="2025-01-29T16:30:16.973370575Z" level=warning msg="cleaning up after shim disconnected" id=bedc62372d5293f34981b82a8fc867174c77b8c5ead39bece0054bc55568aeee namespace=k8s.io Jan 29 16:30:16.973966 containerd[1903]: time="2025-01-29T16:30:16.973389829Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:30:17.460326 containerd[1903]: time="2025-01-29T16:30:17.459001717Z" level=info msg="CreateContainer within sandbox \"73cee41d02052ccdc771f6a30fd5fca5589249d232c45f4f9b1f41c1259d04f6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:30:17.491143 containerd[1903]: time="2025-01-29T16:30:17.491094112Z" level=info msg="CreateContainer within sandbox \"73cee41d02052ccdc771f6a30fd5fca5589249d232c45f4f9b1f41c1259d04f6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"14a26a319ab56bbd6b955e165f9bc583a1c99a842be7c4097ceab97729d517d9\"" Jan 29 16:30:17.492390 containerd[1903]: time="2025-01-29T16:30:17.492357226Z" level=info msg="StartContainer for \"14a26a319ab56bbd6b955e165f9bc583a1c99a842be7c4097ceab97729d517d9\"" Jan 29 16:30:17.606378 systemd[1]: Started cri-containerd-14a26a319ab56bbd6b955e165f9bc583a1c99a842be7c4097ceab97729d517d9.scope - libcontainer container 14a26a319ab56bbd6b955e165f9bc583a1c99a842be7c4097ceab97729d517d9. 
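The mount-bpf-fs init container that runs in the entries above is the step that makes sure a BPF filesystem is mounted (normally at /sys/fs/bpf) for the agent that follows. A small sketch for confirming that mount from the node, using nothing beyond /proc/mounts:

    # Small sketch: list any bpf filesystems currently mounted on the node.
    from pathlib import Path

    bpf_mounts = [
        line.split()[1]
        for line in Path("/proc/mounts").read_text().splitlines()
        if line.split()[2] == "bpf"
    ]
    print(bpf_mounts or "no bpf filesystem mounted")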
Jan 29 16:30:17.720402 systemd[1]: cri-containerd-14a26a319ab56bbd6b955e165f9bc583a1c99a842be7c4097ceab97729d517d9.scope: Deactivated successfully. Jan 29 16:30:17.722579 containerd[1903]: time="2025-01-29T16:30:17.722542447Z" level=info msg="StartContainer for \"14a26a319ab56bbd6b955e165f9bc583a1c99a842be7c4097ceab97729d517d9\" returns successfully" Jan 29 16:30:17.776597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14a26a319ab56bbd6b955e165f9bc583a1c99a842be7c4097ceab97729d517d9-rootfs.mount: Deactivated successfully. Jan 29 16:30:17.789040 containerd[1903]: time="2025-01-29T16:30:17.788873999Z" level=info msg="shim disconnected" id=14a26a319ab56bbd6b955e165f9bc583a1c99a842be7c4097ceab97729d517d9 namespace=k8s.io Jan 29 16:30:17.789040 containerd[1903]: time="2025-01-29T16:30:17.788944734Z" level=warning msg="cleaning up after shim disconnected" id=14a26a319ab56bbd6b955e165f9bc583a1c99a842be7c4097ceab97729d517d9 namespace=k8s.io Jan 29 16:30:17.789040 containerd[1903]: time="2025-01-29T16:30:17.788956796Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:30:17.823749 containerd[1903]: time="2025-01-29T16:30:17.823348152Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:30:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:30:17.961048 kubelet[2394]: E0129 16:30:17.960982 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:18.387792 containerd[1903]: time="2025-01-29T16:30:18.387740823Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:30:18.388835 containerd[1903]: time="2025-01-29T16:30:18.388643427Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 16:30:18.392176 containerd[1903]: time="2025-01-29T16:30:18.390192732Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:30:18.392176 containerd[1903]: time="2025-01-29T16:30:18.392126645Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.537468608s" Jan 29 16:30:18.392661 containerd[1903]: time="2025-01-29T16:30:18.392622601Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 16:30:18.395334 containerd[1903]: time="2025-01-29T16:30:18.395300060Z" level=info msg="CreateContainer within sandbox \"c90a1885bf08a3291aaafe6aee9866fdc251b8bfca20b23915850cf4de3ebfe7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 16:30:18.413071 containerd[1903]: time="2025-01-29T16:30:18.413021138Z" level=info 
msg="CreateContainer within sandbox \"c90a1885bf08a3291aaafe6aee9866fdc251b8bfca20b23915850cf4de3ebfe7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"adadade5ba87b9238218c136859c29e6ba92efb895ffd5e36cc93b95fdcfb75c\"" Jan 29 16:30:18.413872 containerd[1903]: time="2025-01-29T16:30:18.413839586Z" level=info msg="StartContainer for \"adadade5ba87b9238218c136859c29e6ba92efb895ffd5e36cc93b95fdcfb75c\"" Jan 29 16:30:18.455742 systemd[1]: Started cri-containerd-adadade5ba87b9238218c136859c29e6ba92efb895ffd5e36cc93b95fdcfb75c.scope - libcontainer container adadade5ba87b9238218c136859c29e6ba92efb895ffd5e36cc93b95fdcfb75c. Jan 29 16:30:18.461319 containerd[1903]: time="2025-01-29T16:30:18.460988094Z" level=info msg="CreateContainer within sandbox \"73cee41d02052ccdc771f6a30fd5fca5589249d232c45f4f9b1f41c1259d04f6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:30:18.492643 containerd[1903]: time="2025-01-29T16:30:18.492356310Z" level=info msg="CreateContainer within sandbox \"73cee41d02052ccdc771f6a30fd5fca5589249d232c45f4f9b1f41c1259d04f6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"07204a4c8e8658dc76f12e4b60469661d144abd20e9e76c5a2cc85b6917208d9\"" Jan 29 16:30:18.494982 containerd[1903]: time="2025-01-29T16:30:18.494888340Z" level=info msg="StartContainer for \"07204a4c8e8658dc76f12e4b60469661d144abd20e9e76c5a2cc85b6917208d9\"" Jan 29 16:30:18.504001 containerd[1903]: time="2025-01-29T16:30:18.503936734Z" level=info msg="StartContainer for \"adadade5ba87b9238218c136859c29e6ba92efb895ffd5e36cc93b95fdcfb75c\" returns successfully" Jan 29 16:30:18.562453 systemd[1]: Started cri-containerd-07204a4c8e8658dc76f12e4b60469661d144abd20e9e76c5a2cc85b6917208d9.scope - libcontainer container 07204a4c8e8658dc76f12e4b60469661d144abd20e9e76c5a2cc85b6917208d9. Jan 29 16:30:18.616434 containerd[1903]: time="2025-01-29T16:30:18.616391112Z" level=info msg="StartContainer for \"07204a4c8e8658dc76f12e4b60469661d144abd20e9e76c5a2cc85b6917208d9\" returns successfully" Jan 29 16:30:18.772470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount190057393.mount: Deactivated successfully. 
Jan 29 16:30:18.962405 kubelet[2394]: E0129 16:30:18.962350 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:19.525075 kubelet[2394]: I0129 16:30:19.524979 2394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-75nbm" podStartSLOduration=6.524927798 podStartE2EDuration="6.524927798s" podCreationTimestamp="2025-01-29 16:30:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:30:19.508255944 +0000 UTC m=+71.524403100" watchObservedRunningTime="2025-01-29 16:30:19.524927798 +0000 UTC m=+71.541074954" Jan 29 16:30:19.525547 kubelet[2394]: I0129 16:30:19.525374 2394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-ftm6v" podStartSLOduration=1.9854753139999999 podStartE2EDuration="6.525362449s" podCreationTimestamp="2025-01-29 16:30:13 +0000 UTC" firstStartedPulling="2025-01-29 16:30:13.853834099 +0000 UTC m=+65.869981245" lastFinishedPulling="2025-01-29 16:30:18.393721225 +0000 UTC m=+70.409868380" observedRunningTime="2025-01-29 16:30:19.52472837 +0000 UTC m=+71.540875526" watchObservedRunningTime="2025-01-29 16:30:19.525362449 +0000 UTC m=+71.541509608" Jan 29 16:30:19.963019 kubelet[2394]: E0129 16:30:19.962974 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:20.326584 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 29 16:30:20.964101 kubelet[2394]: E0129 16:30:20.964052 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:20.990368 systemd[1]: run-containerd-runc-k8s.io-07204a4c8e8658dc76f12e4b60469661d144abd20e9e76c5a2cc85b6917208d9-runc.2C04HE.mount: Deactivated successfully. Jan 29 16:30:21.964875 kubelet[2394]: E0129 16:30:21.964807 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:22.965544 kubelet[2394]: E0129 16:30:22.965409 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:23.966711 kubelet[2394]: E0129 16:30:23.966640 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:24.967884 kubelet[2394]: E0129 16:30:24.967830 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:25.463995 (udev-worker)[5047]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:30:25.470001 systemd-networkd[1817]: lxc_health: Link UP Jan 29 16:30:25.479284 (udev-worker)[5048]: Network interface NamePolicy= disabled on kernel command line. 
Jan 29 16:30:25.481037 systemd-networkd[1817]: lxc_health: Gained carrier Jan 29 16:30:25.970179 kubelet[2394]: E0129 16:30:25.968853 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:26.731333 systemd-networkd[1817]: lxc_health: Gained IPv6LL Jan 29 16:30:26.969313 kubelet[2394]: E0129 16:30:26.969258 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:27.970326 kubelet[2394]: E0129 16:30:27.970282 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:28.883803 kubelet[2394]: E0129 16:30:28.883750 2394 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:28.971118 kubelet[2394]: E0129 16:30:28.971071 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:29.161825 ntpd[1880]: Listen normally on 15 lxc_health [fe80::acc8:f7ff:fefe:ffb5%15]:123 Jan 29 16:30:29.162542 ntpd[1880]: 29 Jan 16:30:29 ntpd[1880]: Listen normally on 15 lxc_health [fe80::acc8:f7ff:fefe:ffb5%15]:123 Jan 29 16:30:29.190901 systemd[1]: run-containerd-runc-k8s.io-07204a4c8e8658dc76f12e4b60469661d144abd20e9e76c5a2cc85b6917208d9-runc.0KZgua.mount: Deactivated successfully. Jan 29 16:30:29.971787 kubelet[2394]: E0129 16:30:29.971715 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:30.971981 kubelet[2394]: E0129 16:30:30.971914 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:31.620857 systemd[1]: run-containerd-runc-k8s.io-07204a4c8e8658dc76f12e4b60469661d144abd20e9e76c5a2cc85b6917208d9-runc.6AxsVz.mount: Deactivated successfully. 
Jan 29 16:30:31.973108 kubelet[2394]: E0129 16:30:31.972697 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:32.974358 kubelet[2394]: E0129 16:30:32.974304 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:33.975556 kubelet[2394]: E0129 16:30:33.975384 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:34.976145 kubelet[2394]: E0129 16:30:34.976087 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:35.976787 kubelet[2394]: E0129 16:30:35.976730 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:36.977540 kubelet[2394]: E0129 16:30:36.977474 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:37.978030 kubelet[2394]: E0129 16:30:37.977973 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:38.978751 kubelet[2394]: E0129 16:30:38.978704 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:39.979836 kubelet[2394]: E0129 16:30:39.979780 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:40.980392 kubelet[2394]: E0129 16:30:40.980337 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:41.981448 kubelet[2394]: E0129 16:30:41.981392 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:42.981900 kubelet[2394]: E0129 16:30:42.981839 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:43.983169 kubelet[2394]: E0129 16:30:43.983097 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:44.983735 kubelet[2394]: E0129 16:30:44.983626 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:45.984278 kubelet[2394]: E0129 16:30:45.984219 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:46.985078 kubelet[2394]: E0129 16:30:46.985024 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:47.986326 kubelet[2394]: E0129 16:30:47.986182 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:48.883785 kubelet[2394]: E0129 16:30:48.883727 2394 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:48.986962 kubelet[2394]: E0129 16:30:48.986906 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:49.988113 kubelet[2394]: E0129 16:30:49.988054 2394 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:50.433975 kubelet[2394]: E0129 16:30:50.433912 2394 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.31.0)" Jan 29 16:30:50.664368 kubelet[2394]: E0129 16:30:50.664298 2394 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-01-29T16:30:40Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-29T16:30:40Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-29T16:30:40Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-29T16:30:40Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":166719855},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":71015439},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\\\",\\\"registry.k8s.io/kube-proxy:v1.31.5\\\"],\\\"sizeBytes\\\":30230147},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":18897442},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286}]}}\" for node \"172.31.31.0\": the server was unable to return a response in the time allotted, but may still be processing the request (patch nodes 172.31.31.0)" Jan 29 16:30:50.992181 kubelet[2394]: E0129 16:30:50.989897 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:51.991925 kubelet[2394]: E0129 16:30:51.991866 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:52.992268 kubelet[2394]: E0129 16:30:52.992210 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:53.993375 kubelet[2394]: E0129 16:30:53.993319 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:54.993517 kubelet[2394]: E0129 16:30:54.993453 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:55.994507 kubelet[2394]: E0129 16:30:55.994450 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:56.995044 kubelet[2394]: E0129 16:30:56.994989 2394 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:57.996246 kubelet[2394]: E0129 16:30:57.996187 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:58.996875 kubelet[2394]: E0129 16:30:58.996817 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:30:59.998024 kubelet[2394]: E0129 16:30:59.997960 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:00.431395 kubelet[2394]: E0129 16:31:00.431329 2394 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.31.0)" Jan 29 16:31:00.661479 kubelet[2394]: E0129 16:31:00.661418 2394 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.31.0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes 172.31.31.0)" Jan 29 16:31:00.998114 kubelet[2394]: E0129 16:31:00.998063 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:01.998468 kubelet[2394]: E0129 16:31:01.998226 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:02.999302 kubelet[2394]: E0129 16:31:02.999242 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:03.999483 kubelet[2394]: E0129 16:31:03.999426 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:05.000020 kubelet[2394]: E0129 16:31:04.999966 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:06.000906 kubelet[2394]: E0129 16:31:06.000845 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:07.002525 kubelet[2394]: E0129 16:31:07.002466 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:08.003322 kubelet[2394]: E0129 16:31:08.003255 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:08.884340 kubelet[2394]: E0129 16:31:08.884281 2394 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:09.005006 kubelet[2394]: E0129 16:31:09.003833 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:09.007164 containerd[1903]: time="2025-01-29T16:31:09.007023828Z" level=info msg="StopPodSandbox for \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\"" Jan 29 16:31:09.007992 containerd[1903]: time="2025-01-29T16:31:09.007234627Z" level=info msg="TearDown network for sandbox \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\" successfully" Jan 29 16:31:09.007992 containerd[1903]: time="2025-01-29T16:31:09.007255480Z" level=info msg="StopPodSandbox for 
\"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\" returns successfully" Jan 29 16:31:09.007992 containerd[1903]: time="2025-01-29T16:31:09.007979264Z" level=info msg="RemovePodSandbox for \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\"" Jan 29 16:31:09.008269 containerd[1903]: time="2025-01-29T16:31:09.008018684Z" level=info msg="Forcibly stopping sandbox \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\"" Jan 29 16:31:09.008269 containerd[1903]: time="2025-01-29T16:31:09.008214382Z" level=info msg="TearDown network for sandbox \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\" successfully" Jan 29 16:31:09.017434 containerd[1903]: time="2025-01-29T16:31:09.017334304Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:31:09.017596 containerd[1903]: time="2025-01-29T16:31:09.017471338Z" level=info msg="RemovePodSandbox \"061962fca644e7ac2b370e599f489915fa52f33ecabd50788dd089ef8c79187a\" returns successfully" Jan 29 16:31:10.004937 kubelet[2394]: E0129 16:31:10.004879 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:10.428294 kubelet[2394]: E0129 16:31:10.428229 2394 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io 172.31.31.0)" Jan 29 16:31:10.658433 kubelet[2394]: E0129 16:31:10.658391 2394 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.31.0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes 172.31.31.0)" Jan 29 16:31:11.005167 kubelet[2394]: E0129 16:31:11.005102 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:12.005861 kubelet[2394]: E0129 16:31:12.005809 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:13.006169 kubelet[2394]: E0129 16:31:13.006101 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:14.006778 kubelet[2394]: E0129 16:31:14.006717 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:15.007593 kubelet[2394]: E0129 16:31:15.007532 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:16.009201 kubelet[2394]: E0129 16:31:16.009133 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:17.009771 kubelet[2394]: E0129 16:31:17.009714 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:18.010214 kubelet[2394]: E0129 16:31:18.010163 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:19.011193 kubelet[2394]: E0129 16:31:19.011128 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:20.011584 kubelet[2394]: E0129 16:31:20.011526 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:20.429260 kubelet[2394]: E0129 16:31:20.429217 2394 request.go:1255] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body) Jan 29 16:31:20.429471 kubelet[2394]: E0129 16:31:20.429297 2394 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)" Jan 29 16:31:20.455973 kubelet[2394]: E0129 16:31:20.448955 2394 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.0?timeout=10s\": read tcp 172.31.31.0:33484->172.31.31.19:6443: read: connection reset by peer" Jan 29 16:31:20.455973 kubelet[2394]: I0129 16:31:20.449010 2394 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 16:31:21.011965 kubelet[2394]: E0129 16:31:21.011919 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:21.449444 kubelet[2394]: E0129 16:31:21.449379 2394 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.31.0\": Get \"https://172.31.31.19:6443/api/v1/nodes/172.31.31.0?timeout=10s\": context deadline exceeded - error from a previous attempt: read tcp 172.31.31.0:33484->172.31.31.19:6443: read: connection reset by peer" Jan 29 16:31:21.449840 kubelet[2394]: E0129 16:31:21.449791 2394 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.31.0\": Get \"https://172.31.31.19:6443/api/v1/nodes/172.31.31.0?timeout=10s\": dial tcp 172.31.31.19:6443: connect: connection refused" Jan 29 16:31:21.449840 kubelet[2394]: E0129 16:31:21.449803 2394 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC default/test-dynamic-volume-claim: failed to fetch PVC from API server: Get \"https://172.31.31.19:6443/api/v1/namespaces/default/persistentvolumeclaims/test-dynamic-volume-claim\": dial tcp 172.31.31.19:6443: connect: connection refused - error from a previous attempt: read tcp 172.31.31.0:33484->172.31.31.19:6443: read: connection reset by peer" pod="default/test-pod-1" volumeName="config" Jan 29 16:31:21.450017 kubelet[2394]: E0129 16:31:21.449816 2394 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count" Jan 29 16:31:21.469212 kubelet[2394]: E0129 16:31:21.469139 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.0?timeout=10s\": dial tcp 172.31.31.19:6443: connect: connection refused - error from a previous attempt: write tcp 172.31.31.0:44838->172.31.31.19:6443: write: connection reset by peer" interval="200ms" Jan 29 16:31:22.012442 kubelet[2394]: E0129 16:31:22.012383 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:23.013122 kubelet[2394]: E0129 16:31:23.012888 2394 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:24.013838 kubelet[2394]: E0129 16:31:24.013780 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:25.014853 kubelet[2394]: E0129 16:31:25.014795 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:26.015545 kubelet[2394]: E0129 16:31:26.015488 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:27.016539 kubelet[2394]: E0129 16:31:27.016408 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:28.017199 kubelet[2394]: E0129 16:31:28.017135 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:28.884643 kubelet[2394]: E0129 16:31:28.884595 2394 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:29.018109 kubelet[2394]: E0129 16:31:29.018066 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:30.018391 kubelet[2394]: E0129 16:31:30.018333 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:31.019037 kubelet[2394]: E0129 16:31:31.018983 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:31.670557 kubelet[2394]: E0129 16:31:31.670499 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.0?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="400ms" Jan 29 16:31:32.020470 kubelet[2394]: E0129 16:31:32.020090 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:33.020904 kubelet[2394]: E0129 16:31:33.020844 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:34.021628 kubelet[2394]: E0129 16:31:34.021569 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:35.022018 kubelet[2394]: E0129 16:31:35.021962 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:36.022965 kubelet[2394]: E0129 16:31:36.022915 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:37.023521 kubelet[2394]: E0129 16:31:37.023462 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:31:38.024248 kubelet[2394]: E0129 16:31:38.024194 2394 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"