Jan 29 11:58:10.013491 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 29 11:58:10.013533 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 11:58:10.013550 kernel: BIOS-provided physical RAM map: Jan 29 11:58:10.014869 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 11:58:10.014892 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 11:58:10.014903 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 11:58:10.014922 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Jan 29 11:58:10.014933 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Jan 29 11:58:10.014943 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Jan 29 11:58:10.014953 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 11:58:10.014964 kernel: NX (Execute Disable) protection: active Jan 29 11:58:10.014974 kernel: APIC: Static calls initialized Jan 29 11:58:10.014983 kernel: SMBIOS 2.7 present. Jan 29 11:58:10.014995 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jan 29 11:58:10.016440 kernel: Hypervisor detected: KVM Jan 29 11:58:10.016467 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 11:58:10.016481 kernel: kvm-clock: using sched offset of 5761058084 cycles Jan 29 11:58:10.016496 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 11:58:10.016510 kernel: tsc: Detected 2499.996 MHz processor Jan 29 11:58:10.016523 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 11:58:10.016537 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 11:58:10.016556 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Jan 29 11:58:10.016569 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 11:58:10.016582 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 11:58:10.016594 kernel: Using GB pages for direct mapping Jan 29 11:58:10.016606 kernel: ACPI: Early table checksum verification disabled Jan 29 11:58:10.016618 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Jan 29 11:58:10.016631 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Jan 29 11:58:10.016643 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 29 11:58:10.016655 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 29 11:58:10.016671 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Jan 29 11:58:10.016683 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 29 11:58:10.016695 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 29 11:58:10.016708 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jan 29 11:58:10.016721 
kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 29 11:58:10.016733 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jan 29 11:58:10.016745 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jan 29 11:58:10.016758 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jan 29 11:58:10.016770 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Jan 29 11:58:10.016786 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Jan 29 11:58:10.016804 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Jan 29 11:58:10.016818 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Jan 29 11:58:10.016831 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Jan 29 11:58:10.016845 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Jan 29 11:58:10.016861 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Jan 29 11:58:10.016875 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Jan 29 11:58:10.016888 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Jan 29 11:58:10.016901 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Jan 29 11:58:10.016914 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 11:58:10.016928 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 29 11:58:10.016942 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jan 29 11:58:10.016956 kernel: NUMA: Initialized distance table, cnt=1 Jan 29 11:58:10.016970 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Jan 29 11:58:10.016987 kernel: Zone ranges: Jan 29 11:58:10.017193 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 11:58:10.017207 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Jan 29 11:58:10.017221 kernel: Normal empty Jan 29 11:58:10.017235 kernel: Movable zone start for each node Jan 29 11:58:10.017248 kernel: Early memory node ranges Jan 29 11:58:10.017261 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 11:58:10.017274 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Jan 29 11:58:10.017288 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Jan 29 11:58:10.017306 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:58:10.018391 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 11:58:10.018407 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Jan 29 11:58:10.018421 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 29 11:58:10.018515 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 11:58:10.018533 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jan 29 11:58:10.018548 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 11:58:10.018562 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 11:58:10.018575 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 11:58:10.018589 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 11:58:10.018608 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 11:58:10.018621 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 11:58:10.018635 kernel: TSC deadline timer available Jan 29 11:58:10.018648 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 11:58:10.018663 
kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 11:58:10.018676 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jan 29 11:58:10.018690 kernel: Booting paravirtualized kernel on KVM Jan 29 11:58:10.018703 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 11:58:10.018717 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 11:58:10.018734 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 29 11:58:10.018748 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 11:58:10.018761 kernel: pcpu-alloc: [0] 0 1 Jan 29 11:58:10.018774 kernel: kvm-guest: PV spinlocks enabled Jan 29 11:58:10.018788 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 11:58:10.018804 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 11:58:10.018819 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:58:10.018832 kernel: random: crng init done Jan 29 11:58:10.018849 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:58:10.018863 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 11:58:10.018877 kernel: Fallback order for Node 0: 0 Jan 29 11:58:10.018890 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Jan 29 11:58:10.018904 kernel: Policy zone: DMA32 Jan 29 11:58:10.018918 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:58:10.018933 kernel: Memory: 1932344K/2057760K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125156K reserved, 0K cma-reserved) Jan 29 11:58:10.018947 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 11:58:10.018961 kernel: Kernel/User page tables isolation: enabled Jan 29 11:58:10.018979 kernel: ftrace: allocating 37921 entries in 149 pages Jan 29 11:58:10.018993 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 11:58:10.019007 kernel: Dynamic Preempt: voluntary Jan 29 11:58:10.019021 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:58:10.019037 kernel: rcu: RCU event tracing is enabled. Jan 29 11:58:10.019051 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 11:58:10.019065 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:58:10.019078 kernel: Rude variant of Tasks RCU enabled. Jan 29 11:58:10.019091 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:58:10.019110 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 11:58:10.019125 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 11:58:10.019141 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 11:58:10.019158 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
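[editor's note] The command line logged above is the bootloader-provided one; dracut later re-parses it (with an extra rd.driver.pre=btrfs prefix, visible further down), and anything the kernel does not recognize, such as BOOT_IMAGE=, is passed through to user space as noted. A minimal Python sketch of reading these parameters back from /proc/cmdline on a running system; the helper is illustrative (not part of any Flatcar tooling), quoted values are not handled, and keys such as rootflags= can repeat, as they do here:

    # parse_cmdline is an illustrative helper, not part of any Flatcar tooling.
    def parse_cmdline(path="/proc/cmdline"):
        with open(path) as f:
            tokens = f.read().split()
        params = {}
        for tok in tokens:
            key, sep, value = tok.partition("=")
            # Keep a list per key: kernel parameters may legitimately repeat.
            params.setdefault(key, []).append(value if sep else None)
        return params

    params = parse_cmdline()
    print(params.get("root"))            # e.g. ['LABEL=ROOT']
    print(params.get("verity.usrhash"))  # the root hash consumed by verity-setup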
Jan 29 11:58:10.019173 kernel: Console: colour VGA+ 80x25 Jan 29 11:58:10.019189 kernel: printk: console [ttyS0] enabled Jan 29 11:58:10.019205 kernel: ACPI: Core revision 20230628 Jan 29 11:58:10.019221 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jan 29 11:58:10.019237 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 11:58:10.019257 kernel: x2apic enabled Jan 29 11:58:10.019273 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 11:58:10.019303 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jan 29 11:58:10.019335 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Jan 29 11:58:10.019355 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 29 11:58:10.019370 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 29 11:58:10.019386 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 11:58:10.019400 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 11:58:10.019415 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 11:58:10.019430 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 11:58:10.019446 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jan 29 11:58:10.019462 kernel: RETBleed: Vulnerable Jan 29 11:58:10.019477 kernel: Speculative Store Bypass: Vulnerable Jan 29 11:58:10.019497 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 11:58:10.019512 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 11:58:10.019528 kernel: GDS: Unknown: Dependent on hypervisor status Jan 29 11:58:10.019654 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 11:58:10.019673 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 11:58:10.019690 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 11:58:10.019710 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jan 29 11:58:10.019725 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jan 29 11:58:10.019741 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 29 11:58:10.019756 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 29 11:58:10.019773 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 29 11:58:10.019789 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jan 29 11:58:10.019805 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 11:58:10.019821 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jan 29 11:58:10.019837 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jan 29 11:58:10.019852 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jan 29 11:58:10.019868 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jan 29 11:58:10.019886 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jan 29 11:58:10.019902 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jan 29 11:58:10.019918 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Jan 29 11:58:10.019934 kernel: Freeing SMP alternatives memory: 32K Jan 29 11:58:10.019951 kernel: pid_max: default: 32768 minimum: 301 Jan 29 11:58:10.019966 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:58:10.019982 kernel: landlock: Up and running. Jan 29 11:58:10.019998 kernel: SELinux: Initializing. Jan 29 11:58:10.020017 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 11:58:10.020033 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 11:58:10.020049 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 29 11:58:10.020068 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:58:10.020084 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:58:10.020100 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:58:10.020117 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 29 11:58:10.020133 kernel: signal: max sigframe size: 3632 Jan 29 11:58:10.020149 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:58:10.020166 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:58:10.020182 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 11:58:10.020198 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:58:10.020217 kernel: smpboot: x86: Booting SMP configuration: Jan 29 11:58:10.020234 kernel: .... node #0, CPUs: #1 Jan 29 11:58:10.020251 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 29 11:58:10.020268 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 29 11:58:10.020284 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 11:58:10.020301 kernel: smpboot: Max logical packages: 1 Jan 29 11:58:10.023357 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Jan 29 11:58:10.023384 kernel: devtmpfs: initialized Jan 29 11:58:10.023400 kernel: x86/mm: Memory block size: 128MB Jan 29 11:58:10.023423 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:58:10.023439 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 11:58:10.023456 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:58:10.023472 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:58:10.023487 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:58:10.023503 kernel: audit: type=2000 audit(1738151888.750:1): state=initialized audit_enabled=0 res=1 Jan 29 11:58:10.023519 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:58:10.023534 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 11:58:10.023554 kernel: cpuidle: using governor menu Jan 29 11:58:10.023570 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:58:10.023586 kernel: dca service started, version 1.12.1 Jan 29 11:58:10.023602 kernel: PCI: Using configuration type 1 for base access Jan 29 11:58:10.023619 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 11:58:10.023635 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 11:58:10.023651 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 11:58:10.023666 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:58:10.023682 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:58:10.023701 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:58:10.023717 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:58:10.023733 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:58:10.023749 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:58:10.023765 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 29 11:58:10.023781 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 11:58:10.023797 kernel: ACPI: Interpreter enabled Jan 29 11:58:10.023813 kernel: ACPI: PM: (supports S0 S5) Jan 29 11:58:10.023829 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 11:58:10.023845 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 11:58:10.023864 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 11:58:10.023880 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 29 11:58:10.023896 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 11:58:10.024147 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:58:10.024439 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 11:58:10.024634 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 11:58:10.024657 kernel: acpiphp: Slot [3] registered Jan 29 11:58:10.024711 kernel: acpiphp: Slot [4] registered Jan 29 11:58:10.024728 kernel: acpiphp: Slot [5] registered Jan 29 11:58:10.024744 kernel: acpiphp: Slot [6] registered Jan 29 11:58:10.024759 kernel: acpiphp: Slot [7] registered Jan 29 11:58:10.024805 kernel: acpiphp: Slot [8] registered Jan 29 11:58:10.024821 kernel: acpiphp: Slot [9] registered Jan 29 11:58:10.024837 kernel: acpiphp: Slot [10] registered Jan 29 11:58:10.024881 kernel: acpiphp: Slot [11] registered Jan 29 11:58:10.024898 kernel: acpiphp: Slot [12] registered Jan 29 11:58:10.024917 kernel: acpiphp: Slot [13] registered Jan 29 11:58:10.024933 kernel: acpiphp: Slot [14] registered Jan 29 11:58:10.024977 kernel: acpiphp: Slot [15] registered Jan 29 11:58:10.024993 kernel: acpiphp: Slot [16] registered Jan 29 11:58:10.025009 kernel: acpiphp: Slot [17] registered Jan 29 11:58:10.025050 kernel: acpiphp: Slot [18] registered Jan 29 11:58:10.025067 kernel: acpiphp: Slot [19] registered Jan 29 11:58:10.025083 kernel: acpiphp: Slot [20] registered Jan 29 11:58:10.025099 kernel: acpiphp: Slot [21] registered Jan 29 11:58:10.025142 kernel: acpiphp: Slot [22] registered Jan 29 11:58:10.025162 kernel: acpiphp: Slot [23] registered Jan 29 11:58:10.025178 kernel: acpiphp: Slot [24] registered Jan 29 11:58:10.025194 kernel: acpiphp: Slot [25] registered Jan 29 11:58:10.025238 kernel: acpiphp: Slot [26] registered Jan 29 11:58:10.025254 kernel: acpiphp: Slot [27] registered Jan 29 11:58:10.025270 kernel: acpiphp: Slot [28] registered Jan 29 11:58:10.028437 kernel: acpiphp: Slot [29] registered Jan 29 11:58:10.029536 kernel: acpiphp: Slot [30] registered Jan 29 11:58:10.029641 kernel: acpiphp: Slot [31] registered Jan 29 11:58:10.029717 kernel: PCI host bridge to bus 0000:00 
Jan 29 11:58:10.029921 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 11:58:10.030156 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 11:58:10.030630 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 11:58:10.030771 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 29 11:58:10.030893 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 11:58:10.031049 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 11:58:10.031202 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 29 11:58:10.031383 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jan 29 11:58:10.031761 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 29 11:58:10.031911 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jan 29 11:58:10.032039 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jan 29 11:58:10.032167 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jan 29 11:58:10.032290 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jan 29 11:58:10.032729 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jan 29 11:58:10.032882 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jan 29 11:58:10.033030 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jan 29 11:58:10.033433 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jan 29 11:58:10.033585 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Jan 29 11:58:10.035465 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 29 11:58:10.035628 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 11:58:10.035783 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 29 11:58:10.035920 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Jan 29 11:58:10.036062 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 29 11:58:10.036197 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Jan 29 11:58:10.037354 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 11:58:10.037392 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 11:58:10.037415 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 11:58:10.037450 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 11:58:10.037466 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 11:58:10.037482 kernel: iommu: Default domain type: Translated Jan 29 11:58:10.037685 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 11:58:10.037704 kernel: PCI: Using ACPI for IRQ routing Jan 29 11:58:10.037721 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 11:58:10.037738 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 11:58:10.037755 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Jan 29 11:58:10.037933 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jan 29 11:58:10.038083 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jan 29 11:58:10.038389 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 11:58:10.038416 kernel: vgaarb: loaded Jan 29 11:58:10.038434 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jan 29 11:58:10.038451 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jan 29 11:58:10.038468 kernel: clocksource: Switched 
to clocksource kvm-clock Jan 29 11:58:10.038484 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 11:58:10.038502 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:58:10.038525 kernel: pnp: PnP ACPI init Jan 29 11:58:10.038542 kernel: pnp: PnP ACPI: found 5 devices Jan 29 11:58:10.038558 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 11:58:10.038575 kernel: NET: Registered PF_INET protocol family Jan 29 11:58:10.038591 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 11:58:10.038608 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 29 11:58:10.038626 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:58:10.038643 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 11:58:10.038671 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 11:58:10.038690 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 29 11:58:10.038707 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 11:58:10.038723 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 11:58:10.038739 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:58:10.038755 kernel: NET: Registered PF_XDP protocol family Jan 29 11:58:10.038975 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 11:58:10.039107 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 11:58:10.039228 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 11:58:10.042417 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 29 11:58:10.042587 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 11:58:10.042610 kernel: PCI: CLS 0 bytes, default 64 Jan 29 11:58:10.042628 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 11:58:10.042644 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jan 29 11:58:10.042661 kernel: clocksource: Switched to clocksource tsc Jan 29 11:58:10.042677 kernel: Initialise system trusted keyrings Jan 29 11:58:10.042693 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 11:58:10.042716 kernel: Key type asymmetric registered Jan 29 11:58:10.042732 kernel: Asymmetric key parser 'x509' registered Jan 29 11:58:10.042748 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:58:10.042765 kernel: io scheduler mq-deadline registered Jan 29 11:58:10.042781 kernel: io scheduler kyber registered Jan 29 11:58:10.042797 kernel: io scheduler bfq registered Jan 29 11:58:10.042813 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:58:10.042829 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:58:10.042846 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:58:10.042865 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 11:58:10.042882 kernel: i8042: Warning: Keylock active Jan 29 11:58:10.042898 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 11:58:10.042915 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 11:58:10.043061 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 29 11:58:10.043188 kernel: rtc_cmos 00:00: registered as rtc0 Jan 29 
11:58:10.043348 kernel: rtc_cmos 00:00: setting system clock to 2025-01-29T11:58:09 UTC (1738151889) Jan 29 11:58:10.043473 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 29 11:58:10.043496 kernel: intel_pstate: CPU model not supported Jan 29 11:58:10.043513 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:58:10.043529 kernel: Segment Routing with IPv6 Jan 29 11:58:10.043546 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:58:10.043562 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:58:10.043578 kernel: Key type dns_resolver registered Jan 29 11:58:10.043594 kernel: IPI shorthand broadcast: enabled Jan 29 11:58:10.043611 kernel: sched_clock: Marking stable (469002412, 240820951)->(785436402, -75613039) Jan 29 11:58:10.043627 kernel: registered taskstats version 1 Jan 29 11:58:10.043647 kernel: Loading compiled-in X.509 certificates Jan 29 11:58:10.043663 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 11:58:10.043680 kernel: Key type .fscrypt registered Jan 29 11:58:10.043696 kernel: Key type fscrypt-provisioning registered Jan 29 11:58:10.043712 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:58:10.043729 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:58:10.043745 kernel: ima: No architecture policies found Jan 29 11:58:10.043761 kernel: clk: Disabling unused clocks Jan 29 11:58:10.043777 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 11:58:10.043797 kernel: Write protecting the kernel read-only data: 36864k Jan 29 11:58:10.043813 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 11:58:10.043829 kernel: Run /init as init process Jan 29 11:58:10.043844 kernel: with arguments: Jan 29 11:58:10.043861 kernel: /init Jan 29 11:58:10.043954 kernel: with environment: Jan 29 11:58:10.043971 kernel: HOME=/ Jan 29 11:58:10.043987 kernel: TERM=linux Jan 29 11:58:10.044003 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:58:10.044031 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:58:10.044133 systemd[1]: Detected virtualization amazon. Jan 29 11:58:10.044157 systemd[1]: Detected architecture x86-64. Jan 29 11:58:10.044175 systemd[1]: Running in initrd. Jan 29 11:58:10.044192 systemd[1]: No hostname configured, using default hostname. Jan 29 11:58:10.044213 systemd[1]: Hostname set to <localhost>. Jan 29 11:58:10.044231 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:58:10.044250 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:58:10.044268 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:58:10.044286 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:58:10.044305 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:58:10.045254 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:58:10.045272 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Jan 29 11:58:10.045298 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:58:10.045436 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:58:10.045458 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:58:10.045480 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:58:10.045501 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:58:10.045521 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:58:10.045542 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:58:10.045567 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:58:10.045588 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:58:10.045609 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:58:10.045630 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:58:10.045647 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:58:10.045668 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:58:10.045689 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:58:10.045708 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:58:10.045728 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:58:10.045744 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:58:10.045760 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:58:10.045776 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:58:10.045791 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:58:10.045807 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:58:10.045823 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:58:10.045842 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:58:10.045857 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:58:10.045874 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:58:10.045891 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:58:10.045909 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:58:10.045932 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 11:58:10.045955 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:58:10.046006 systemd-journald[178]: Collecting audit messages is disabled. Jan 29 11:58:10.046047 systemd-journald[178]: Journal started Jan 29 11:58:10.046081 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2a044933fef621000f29ea9b65b58d) is 4.8M, max 38.6M, 33.7M free. Jan 29 11:58:10.008393 systemd-modules-load[179]: Inserted module 'overlay' Jan 29 11:58:10.057378 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:58:10.071344 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
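[editor's note] The bridge warning above is the kernel asking for br_netfilter to be loaded explicitly, and the very next entries show systemd-modules-load doing exactly that in this initrd. On systems where nothing loads it automatically, a one-line systemd-modules-load drop-in is the usual persistent fix (an illustrative snippet, not taken from this image):

    # /etc/modules-load.d/br_netfilter.conf
    br_netfilter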
Jan 29 11:58:10.082504 kernel: Bridge firewalling registered Jan 29 11:58:10.082530 systemd-modules-load[179]: Inserted module 'br_netfilter' Jan 29 11:58:10.234616 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:58:10.236493 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:58:10.243578 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:58:10.252642 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:58:10.263653 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:58:10.271518 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:58:10.283368 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:58:10.299440 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:58:10.312147 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:58:10.330568 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:58:10.332421 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:58:10.335796 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:58:10.349728 dracut-cmdline[211]: dracut-dracut-053 Jan 29 11:58:10.351032 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:58:10.358350 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 11:58:10.414509 systemd-resolved[218]: Positive Trust Anchors: Jan 29 11:58:10.414529 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:58:10.414586 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:58:10.430794 systemd-resolved[218]: Defaulting to hostname 'linux'. Jan 29 11:58:10.433509 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:58:10.437111 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:58:10.458343 kernel: SCSI subsystem initialized Jan 29 11:58:10.469351 kernel: Loading iSCSI transport class v2.0-870. 
Jan 29 11:58:10.480347 kernel: iscsi: registered transport (tcp) Jan 29 11:58:10.503632 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:58:10.503713 kernel: QLogic iSCSI HBA Driver Jan 29 11:58:10.548181 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:58:10.557433 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:58:10.594452 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:58:10.594528 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:58:10.594543 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:58:10.638354 kernel: raid6: avx512x4 gen() 11427 MB/s Jan 29 11:58:10.655526 kernel: raid6: avx512x2 gen() 10672 MB/s Jan 29 11:58:10.673348 kernel: raid6: avx512x1 gen() 8660 MB/s Jan 29 11:58:10.691338 kernel: raid6: avx2x4 gen() 5642 MB/s Jan 29 11:58:10.708348 kernel: raid6: avx2x2 gen() 12918 MB/s Jan 29 11:58:10.725365 kernel: raid6: avx2x1 gen() 10059 MB/s Jan 29 11:58:10.725442 kernel: raid6: using algorithm avx2x2 gen() 12918 MB/s Jan 29 11:58:10.743361 kernel: raid6: .... xor() 17120 MB/s, rmw enabled Jan 29 11:58:10.743441 kernel: raid6: using avx512x2 recovery algorithm Jan 29 11:58:10.772359 kernel: xor: automatically using best checksumming function avx Jan 29 11:58:10.979346 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:58:10.989821 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:58:10.996524 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:58:11.029242 systemd-udevd[397]: Using default interface naming scheme 'v255'. Jan 29 11:58:11.040679 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:58:11.054881 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:58:11.083561 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation Jan 29 11:58:11.126248 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:58:11.131505 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:58:11.211358 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:58:11.224537 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:58:11.251926 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:58:11.261959 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:58:11.278079 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:58:11.286652 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:58:11.295492 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:58:11.350034 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:58:11.366333 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:58:11.371499 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 29 11:58:11.395765 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 29 11:58:11.395971 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. 
Jan 29 11:58:11.396134 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:e3:49:ed:40:9b Jan 29 11:58:11.400849 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:58:11.400909 kernel: AES CTR mode by8 optimization enabled Jan 29 11:58:11.400679 (udev-worker)[451]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:58:11.432640 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:58:11.433077 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:58:11.441511 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:58:11.443391 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:58:11.443734 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:58:11.452803 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:58:11.460981 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:58:11.478388 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 29 11:58:11.480704 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 29 11:58:11.489342 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 29 11:58:11.495506 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:58:11.495567 kernel: GPT:9289727 != 16777215 Jan 29 11:58:11.495586 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:58:11.495604 kernel: GPT:9289727 != 16777215 Jan 29 11:58:11.495622 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:58:11.495641 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 11:58:11.608337 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (450) Jan 29 11:58:11.618361 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (443) Jan 29 11:58:11.671460 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:58:11.685649 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:58:11.724713 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:58:11.754568 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 29 11:58:11.765632 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 29 11:58:11.772213 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 29 11:58:11.772360 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 29 11:58:11.784343 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 11:58:11.796999 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:58:11.810808 disk-uuid[627]: Primary Header is updated. Jan 29 11:58:11.810808 disk-uuid[627]: Secondary Entries is updated. Jan 29 11:58:11.810808 disk-uuid[627]: Secondary Header is updated. 
Jan 29 11:58:11.819341 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 11:58:11.824431 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 11:58:11.829410 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 11:58:12.837347 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 11:58:12.837903 disk-uuid[628]: The operation has completed successfully. Jan 29 11:58:12.989003 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:58:12.989126 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:58:13.018557 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:58:13.047164 sh[973]: Success Jan 29 11:58:13.066393 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 11:58:13.180513 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:58:13.194517 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:58:13.197386 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 11:58:13.238996 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 11:58:13.239060 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:58:13.239088 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:58:13.240036 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:58:13.241482 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:58:13.342348 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 11:58:13.355679 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:58:13.356710 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:58:13.370559 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:58:13.377808 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:58:13.408910 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:58:13.408979 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:58:13.409001 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 11:58:13.415353 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 11:58:13.431301 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:58:13.433041 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:58:13.442430 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:58:13.451901 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:58:13.504989 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:58:13.514507 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:58:13.543344 systemd-networkd[1166]: lo: Link UP Jan 29 11:58:13.543355 systemd-networkd[1166]: lo: Gained carrier Jan 29 11:58:13.545196 systemd-networkd[1166]: Enumeration completed Jan 29 11:58:13.545401 systemd[1]: Started systemd-networkd.service - Network Configuration. 
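[editor's note] The verity-setup step above maps the read-only /usr partition through dm-verity: verity.usrhash on the command line is the root of a hash tree over the partition, and the sha256-avx2 line shows the kernel picking an accelerated digest for the per-read checks. Below is a toy illustration of the root-hash idea only; real dm-verity uses salted SHA-256 over a fixed on-disk tree layout, so this will not reproduce the actual usrhash:

    import hashlib

    def toy_merkle_root(path, block=4096):
        # Hash each data block, then hash pairs of digests up to one root.
        with open(path, "rb") as f:
            level = []
            while chunk := f.read(block):
                level.append(hashlib.sha256(chunk).digest())
        if not level:
            return hashlib.sha256(b"").hexdigest()
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])  # duplicate the tail on odd levels
            level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return level[0].hex()

Flipping any byte of any block changes the root, which is why a mismatch against the kernel-supplied hash fails the mount rather than silently serving tampered /usr content.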
Jan 29 11:58:13.546206 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:58:13.546212 systemd-networkd[1166]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:58:13.547172 systemd[1]: Reached target network.target - Network. Jan 29 11:58:13.555511 systemd-networkd[1166]: eth0: Link UP Jan 29 11:58:13.555516 systemd-networkd[1166]: eth0: Gained carrier Jan 29 11:58:13.555532 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:58:13.579454 systemd-networkd[1166]: eth0: DHCPv4 address 172.31.24.218/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 11:58:13.768520 ignition[1105]: Ignition 2.19.0 Jan 29 11:58:13.768534 ignition[1105]: Stage: fetch-offline Jan 29 11:58:13.768734 ignition[1105]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:58:13.768742 ignition[1105]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 11:58:13.768971 ignition[1105]: Ignition finished successfully Jan 29 11:58:13.776025 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:58:13.783707 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 11:58:13.808598 ignition[1176]: Ignition 2.19.0 Jan 29 11:58:13.808617 ignition[1176]: Stage: fetch Jan 29 11:58:13.809415 ignition[1176]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:58:13.809429 ignition[1176]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 11:58:13.809540 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 11:58:13.863440 ignition[1176]: PUT result: OK Jan 29 11:58:13.867592 ignition[1176]: parsed url from cmdline: "" Jan 29 11:58:13.867605 ignition[1176]: no config URL provided Jan 29 11:58:13.867616 ignition[1176]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:58:13.867631 ignition[1176]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:58:13.867654 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 11:58:13.870634 ignition[1176]: PUT result: OK Jan 29 11:58:13.871681 ignition[1176]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 29 11:58:13.875376 ignition[1176]: GET result: OK Jan 29 11:58:13.875457 ignition[1176]: parsing config with SHA512: c4980e359f020ef02a7f6882acdc36988413eaf9baba6de8c002f5028727ba80fe86786acf7c743ed4444eaa3d02bcfb10b6fc2d53592e8577b08e283ada7c2a Jan 29 11:58:13.884023 unknown[1176]: fetched base config from "system" Jan 29 11:58:13.884040 unknown[1176]: fetched base config from "system" Jan 29 11:58:13.884697 ignition[1176]: fetch: fetch complete Jan 29 11:58:13.884048 unknown[1176]: fetched user config from "aws" Jan 29 11:58:13.884704 ignition[1176]: fetch: fetch passed Jan 29 11:58:13.884759 ignition[1176]: Ignition finished successfully Jan 29 11:58:13.889042 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 11:58:13.900530 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
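[editor's note] The fetch stage above is a complete IMDSv2 exchange: Ignition first PUTs to /latest/api/token for a session token, then GETs the user data with that token attached, and finally hashes the payload. A minimal reproduction of the same two requests, runnable only from inside an EC2 instance; the 21600-second TTL is an assumption, while the paths are copied from the log:

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: PUT for a session token (this is what makes it IMDSv2).
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    # Step 2: GET the user data, presenting the token.
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token})
    user_data = urllib.request.urlopen(req, timeout=2).read()
    print(f"{len(user_data)} bytes of user data")

The log's "parsing config with SHA512: ..." entry is that same payload being digested before the base config from "system" and the user config from "aws" are merged.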
Jan 29 11:58:13.936267 ignition[1183]: Ignition 2.19.0 Jan 29 11:58:13.936283 ignition[1183]: Stage: kargs Jan 29 11:58:13.937018 ignition[1183]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:58:13.937033 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 11:58:13.937144 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 11:58:13.939025 ignition[1183]: PUT result: OK Jan 29 11:58:13.952873 ignition[1183]: kargs: kargs passed Jan 29 11:58:13.952962 ignition[1183]: Ignition finished successfully Jan 29 11:58:13.959028 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:58:13.973970 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:58:14.003990 ignition[1189]: Ignition 2.19.0 Jan 29 11:58:14.004004 ignition[1189]: Stage: disks Jan 29 11:58:14.004617 ignition[1189]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:58:14.004627 ignition[1189]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 11:58:14.004733 ignition[1189]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 11:58:14.007001 ignition[1189]: PUT result: OK Jan 29 11:58:14.016908 ignition[1189]: disks: disks passed Jan 29 11:58:14.016982 ignition[1189]: Ignition finished successfully Jan 29 11:58:14.020101 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:58:14.022621 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:58:14.030871 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:58:14.036447 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:58:14.038106 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:58:14.039208 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:58:14.052881 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:58:14.120141 systemd-fsck[1197]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 11:58:14.124650 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:58:14.137539 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:58:14.317419 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 11:58:14.318439 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:58:14.319181 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:58:14.337502 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:58:14.349797 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:58:14.355786 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 11:58:14.355846 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:58:14.355878 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:58:14.368218 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:58:14.380601 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 29 11:58:14.394337 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1216) Jan 29 11:58:14.398470 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:58:14.398537 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:58:14.398555 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 11:58:14.412342 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 11:58:14.415836 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:58:14.840795 initrd-setup-root[1240]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:58:14.874212 initrd-setup-root[1247]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:58:14.918417 initrd-setup-root[1254]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:58:14.933462 systemd-networkd[1166]: eth0: Gained IPv6LL Jan 29 11:58:14.943902 initrd-setup-root[1261]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:58:15.266737 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:58:15.273611 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:58:15.278669 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:58:15.288236 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:58:15.289431 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:58:15.322520 ignition[1328]: INFO : Ignition 2.19.0 Jan 29 11:58:15.323956 ignition[1328]: INFO : Stage: mount Jan 29 11:58:15.323956 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:58:15.323956 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 11:58:15.323956 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 11:58:15.328772 ignition[1328]: INFO : PUT result: OK Jan 29 11:58:15.331550 ignition[1328]: INFO : mount: mount passed Jan 29 11:58:15.331550 ignition[1328]: INFO : Ignition finished successfully Jan 29 11:58:15.334008 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:58:15.342605 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:58:15.345740 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:58:15.364546 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:58:15.397411 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1340) Jan 29 11:58:15.397481 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:58:15.397500 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:58:15.399661 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 11:58:15.408469 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 11:58:15.410672 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:58:15.460762 ignition[1357]: INFO : Ignition 2.19.0 Jan 29 11:58:15.460762 ignition[1357]: INFO : Stage: files Jan 29 11:58:15.463933 ignition[1357]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:58:15.463933 ignition[1357]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 11:58:15.463933 ignition[1357]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 11:58:15.463933 ignition[1357]: INFO : PUT result: OK Jan 29 11:58:15.471200 ignition[1357]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:58:15.495527 ignition[1357]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:58:15.495527 ignition[1357]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:58:15.553420 ignition[1357]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:58:15.555101 ignition[1357]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:58:15.560552 unknown[1357]: wrote ssh authorized keys file for user: core Jan 29 11:58:15.562356 ignition[1357]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:58:15.566954 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 11:58:15.568879 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 11:58:15.577008 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:58:15.577008 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 11:58:15.692647 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 11:58:15.829971 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:58:15.829971 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:58:15.839545 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 11:58:16.307431 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 29 11:58:16.430354 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:58:16.433158 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:58:16.433158 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:58:16.433158 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:58:16.433158 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:58:16.433158 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 
11:58:16.433158 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:58:16.433158 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:58:16.433158 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:58:16.433158 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:58:16.433158 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:58:16.433158 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:58:16.433158 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:58:16.433158 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:58:16.433158 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 11:58:16.721761 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 29 11:58:17.306953 ignition[1357]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:58:17.306953 ignition[1357]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 29 11:58:17.313608 ignition[1357]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 11:58:17.316456 ignition[1357]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 11:58:17.316456 ignition[1357]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 29 11:58:17.316456 ignition[1357]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 29 11:58:17.316456 ignition[1357]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:58:17.316456 ignition[1357]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:58:17.316456 ignition[1357]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 29 11:58:17.316456 ignition[1357]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:58:17.316456 ignition[1357]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:58:17.316456 ignition[1357]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:58:17.316456 ignition[1357]: INFO : files: createResultFile: 
createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:58:17.316456 ignition[1357]: INFO : files: files passed Jan 29 11:58:17.316456 ignition[1357]: INFO : Ignition finished successfully Jan 29 11:58:17.317675 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:58:17.356803 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:58:17.369801 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:58:17.372001 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:58:17.372132 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:58:17.400255 initrd-setup-root-after-ignition[1386]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:58:17.400255 initrd-setup-root-after-ignition[1386]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:58:17.404864 initrd-setup-root-after-ignition[1390]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:58:17.410056 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:58:17.410681 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:58:17.427659 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:58:17.517548 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:58:17.517674 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:58:17.520237 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:58:17.523709 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:58:17.526036 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:58:17.533556 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:58:17.565158 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:58:17.572936 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:58:17.608807 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:58:17.609489 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:58:17.619545 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:58:17.622449 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:58:17.623570 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:58:17.626942 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:58:17.628343 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:58:17.631807 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:58:17.631947 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:58:17.641601 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:58:17.645322 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:58:17.646769 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:58:17.649028 systemd[1]: Stopped target sysinit.target - System Initialization. 
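
The files stage above also wrote a systemd drop-in, 10-use-cgroupfs.conf, under containerd.service.d. The log records only the drop-in's name and path, not its body, so the body below is a deliberate placeholder; what the sketch shows is just the drop-in mechanism itself, a directory named <unit>.d containing .conf fragments that systemd layers over the unit:

    import os

    DROPIN_DIR = "/etc/systemd/system/containerd.service.d"
    DROPIN_PATH = os.path.join(DROPIN_DIR, "10-use-cgroupfs.conf")

    # Placeholder body: the log names the file but does not show its contents.
    DROPIN_BODY = "[Service]\n# (unit directives would go here)\n"

    os.makedirs(DROPIN_DIR, exist_ok=True)
    with open(DROPIN_PATH, "w") as f:
        f.write(DROPIN_BODY)

On a running system a drop-in takes effect after a daemon reload; here Ignition writes it under /sysroot before the pivot, so it is simply present when the real root boots.
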
Jan 29 11:58:17.651348 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:58:17.653432 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:58:17.659079 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:58:17.659370 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:58:17.663764 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:58:17.665412 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:58:17.669177 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:58:17.670071 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:58:17.673057 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:58:17.673273 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:58:17.680022 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:58:17.681405 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:58:17.684739 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:58:17.685847 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:58:17.695663 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:58:17.697871 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:58:17.698115 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:58:17.702889 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:58:17.706379 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:58:17.706632 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:58:17.712492 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:58:17.712638 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:58:17.724170 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:58:17.724281 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:58:17.746388 ignition[1410]: INFO : Ignition 2.19.0 Jan 29 11:58:17.746388 ignition[1410]: INFO : Stage: umount Jan 29 11:58:17.749537 ignition[1410]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:58:17.749537 ignition[1410]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 11:58:17.749537 ignition[1410]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 11:58:17.748857 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:58:17.759483 ignition[1410]: INFO : PUT result: OK Jan 29 11:58:17.753734 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:58:17.754512 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:58:17.765946 ignition[1410]: INFO : umount: umount passed Jan 29 11:58:17.767571 ignition[1410]: INFO : Ignition finished successfully Jan 29 11:58:17.769557 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:58:17.769703 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:58:17.773045 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:58:17.773159 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:58:17.774367 systemd[1]: ignition-kargs.service: Deactivated successfully. 
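
Earlier, the files stage fetched helm, the cilium CLI, and a kubernetes sysext image over HTTPS, logging each transfer as "attempt #1" and "GET result: OK". A minimal retry loop in that spirit is sketched below; the attempt count and backoff are illustrative assumptions, not Ignition's actual policy:

    import time
    import urllib.request

    def fetch_with_retries(url, dest, attempts=5, backoff=2.0):
        for attempt in range(1, attempts + 1):
            print(f"GET {url}: attempt #{attempt}")
            try:
                with urllib.request.urlopen(url, timeout=30) as resp, \
                        open(dest, "wb") as out:
                    out.write(resp.read())
                print("GET result: OK")
                return
            except OSError as err:  # URLError/HTTPError are OSError subclasses
                print(f"GET result: {err}")
                if attempt == attempts:
                    raise
                time.sleep(backoff * attempt)  # linear backoff between tries

    # Example, using a URL taken from the log above:
    # fetch_with_retries(
    #     "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz",
    #     "/tmp/helm.tar.gz",
    # )
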
Jan 29 11:58:17.774731 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:58:17.776935 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 11:58:17.779216 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 11:58:17.781450 systemd[1]: Stopped target network.target - Network. Jan 29 11:58:17.781556 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:58:17.781626 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:58:17.784858 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:58:17.788380 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:58:17.790448 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:58:17.791768 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:58:17.794204 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:58:17.795265 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:58:17.795324 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:58:17.797245 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:58:17.797286 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:58:17.798583 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:58:17.798637 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:58:17.804617 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:58:17.804724 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:58:17.806658 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:58:17.806710 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:58:17.809285 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:58:17.813484 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:58:17.817422 systemd-networkd[1166]: eth0: DHCPv6 lease lost Jan 29 11:58:17.818797 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:58:17.818917 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:58:17.821074 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:58:17.821167 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:58:17.830280 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:58:17.830523 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:58:17.846872 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:58:17.849592 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:58:17.849686 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:58:17.857226 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:58:17.857306 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:58:17.864440 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:58:17.864613 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:58:17.872242 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:58:17.872346 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 29 11:58:17.881977 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:58:17.924813 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:58:17.925032 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:58:17.933931 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:58:17.934082 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:58:17.940207 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:58:17.940267 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:58:17.948721 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:58:17.950445 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:58:17.954529 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:58:17.954612 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:58:17.958681 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:58:17.958762 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:58:17.983695 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:58:17.985830 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:58:17.985939 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:58:17.988419 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:58:17.988498 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:58:17.991988 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:58:17.992252 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:58:18.015535 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:58:18.015677 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:58:18.018431 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:58:18.031153 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:58:18.099876 systemd[1]: Switching root. Jan 29 11:58:18.148583 systemd-journald[178]: Journal stopped Jan 29 11:58:20.608721 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jan 29 11:58:20.608802 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:58:20.608830 kernel: SELinux: policy capability open_perms=1 Jan 29 11:58:20.608847 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:58:20.608863 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:58:20.608878 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:58:20.608894 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:58:20.608911 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:58:20.608933 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:58:20.608950 kernel: audit: type=1403 audit(1738151898.979:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:58:20.608972 systemd[1]: Successfully loaded SELinux policy in 64.789ms. Jan 29 11:58:20.609005 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.898ms. 
Jan 29 11:58:20.609023 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:58:20.609041 systemd[1]: Detected virtualization amazon. Jan 29 11:58:20.609177 systemd[1]: Detected architecture x86-64. Jan 29 11:58:20.609200 systemd[1]: Detected first boot. Jan 29 11:58:20.609218 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:58:20.609241 zram_generator::config[1470]: No configuration found. Jan 29 11:58:20.609261 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:58:20.609280 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:58:20.609303 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 29 11:58:20.609364 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:58:20.609382 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:58:20.609404 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:58:20.609422 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:58:20.609443 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:58:20.609461 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:58:20.609478 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:58:20.609494 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:58:20.609512 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:58:20.609529 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:58:20.609547 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:58:20.609564 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:58:20.609581 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:58:20.609602 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:58:20.609619 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 11:58:20.609637 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:58:20.609654 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:58:20.609677 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:58:20.609694 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:58:20.609711 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:58:20.609729 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:58:20.609750 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:58:20.609767 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:58:20.609786 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
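
"Initializing machine ID from VM UUID" above means that on first boot, with no /etc/machine-id yet, systemd derives the machine ID from the hypervisor-provided DMI product UUID. The sketch below only shows where that source value lives and its shape; systemd's actual derivation applies additional normalization rules:

    def vm_uuid_machine_id():
        # The DMI product UUID is exposed by the hypervisor via SMBIOS.
        # Reading this file usually requires root.
        with open("/sys/class/dmi/id/product_uuid") as f:
            uuid = f.read().strip()
        # A machine ID is 32 lowercase hex digits; dropping the dashes from
        # the UUID gives the same shape (systemd applies further rules).
        return uuid.lower().replace("-", "")

    if __name__ == "__main__":
        print(vm_uuid_machine_id())
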
Jan 29 11:58:20.609804 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:58:20.609827 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:58:20.609844 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:58:20.609860 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:58:20.609877 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:58:20.609895 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:58:20.609915 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:58:20.609933 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:58:20.609951 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:58:20.609970 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:58:20.609988 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:58:20.610006 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:58:20.610023 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:58:20.610039 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:58:20.610056 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:58:20.610077 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:58:20.610094 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:58:20.610111 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:58:20.610128 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:58:20.610152 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:58:20.610170 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:58:20.610190 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:58:20.610212 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 29 11:58:20.610235 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 29 11:58:20.610254 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:58:20.610271 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:58:20.610289 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:58:20.610335 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:58:20.610355 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:58:20.610373 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:58:20.610390 kernel: fuse: init (API version 7.39) Jan 29 11:58:20.610407 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 29 11:58:20.610428 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:58:20.610445 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:58:20.610462 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:58:20.610480 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:58:20.610504 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:58:20.610565 systemd-journald[1574]: Collecting audit messages is disabled. Jan 29 11:58:20.610604 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:58:20.610627 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:58:20.610645 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:58:20.610662 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:58:20.610682 systemd-journald[1574]: Journal started Jan 29 11:58:20.610722 systemd-journald[1574]: Runtime Journal (/run/log/journal/ec2a044933fef621000f29ea9b65b58d) is 4.8M, max 38.6M, 33.7M free. Jan 29 11:58:20.614406 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:58:20.614469 kernel: loop: module loaded Jan 29 11:58:20.619109 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:58:20.619471 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:58:20.622420 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:58:20.622624 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:58:20.627465 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:58:20.627692 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:58:20.633131 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:58:20.636407 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:58:20.638604 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:58:20.640547 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:58:20.645265 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:58:20.678059 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:58:20.683354 kernel: ACPI: bus type drm_connector registered Jan 29 11:58:20.687445 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:58:20.698470 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:58:20.700037 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:58:20.710582 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:58:20.715041 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:58:20.717482 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:58:20.727584 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:58:20.729068 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 29 11:58:20.734783 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:58:20.749506 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:58:20.754392 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:58:20.754895 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:58:20.756672 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:58:20.759551 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:58:20.780428 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:58:20.784470 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:58:20.787519 systemd-journald[1574]: Time spent on flushing to /var/log/journal/ec2a044933fef621000f29ea9b65b58d is 82.665ms for 953 entries. Jan 29 11:58:20.787519 systemd-journald[1574]: System Journal (/var/log/journal/ec2a044933fef621000f29ea9b65b58d) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:58:20.883770 systemd-journald[1574]: Received client request to flush runtime journal. Jan 29 11:58:20.824104 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:58:20.837605 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:58:20.847654 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:58:20.860064 systemd-tmpfiles[1617]: ACLs are not supported, ignoring. Jan 29 11:58:20.860088 systemd-tmpfiles[1617]: ACLs are not supported, ignoring. Jan 29 11:58:20.865079 udevadm[1629]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 11:58:20.875827 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:58:20.889680 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:58:20.892004 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:58:20.970365 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:58:20.979565 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:58:21.009266 systemd-tmpfiles[1641]: ACLs are not supported, ignoring. Jan 29 11:58:21.009297 systemd-tmpfiles[1641]: ACLs are not supported, ignoring. Jan 29 11:58:21.018400 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:58:21.557670 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:58:21.566899 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:58:21.607202 systemd-udevd[1647]: Using default interface naming scheme 'v255'. Jan 29 11:58:21.671298 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:58:21.687182 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:58:21.741561 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:58:21.772101 (udev-worker)[1655]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:58:21.814699 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. 
Jan 29 11:58:21.864898 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 11:58:21.870336 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:58:21.878366 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 29 11:58:21.880976 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jan 29 11:58:21.898610 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:58:21.906341 kernel: ACPI: button: Sleep Button [SLPF] Jan 29 11:58:21.925733 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Jan 29 11:58:21.959352 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:58:21.998545 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:58:22.015574 systemd-networkd[1652]: lo: Link UP Jan 29 11:58:22.015585 systemd-networkd[1652]: lo: Gained carrier Jan 29 11:58:22.020910 systemd-networkd[1652]: Enumeration completed Jan 29 11:58:22.022122 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:58:22.022873 systemd-networkd[1652]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:58:22.022880 systemd-networkd[1652]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:58:22.028632 systemd-networkd[1652]: eth0: Link UP Jan 29 11:58:22.028900 systemd-networkd[1652]: eth0: Gained carrier Jan 29 11:58:22.028929 systemd-networkd[1652]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:58:22.030616 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:58:22.040553 systemd-networkd[1652]: eth0: DHCPv4 address 172.31.24.218/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 11:58:22.053518 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1655) Jan 29 11:58:22.197993 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:58:22.215000 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 11:58:22.222706 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:58:22.252343 lvm[1768]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:58:22.279709 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:58:22.352272 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:58:22.365674 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:58:22.371721 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:58:22.393739 lvm[1773]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:58:22.437000 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:58:22.439750 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:58:22.441435 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
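
systemd-networkd acquires 172.31.24.218/20 with gateway 172.31.16.1 via DHCPv4 above. The /20 prefix places both addresses in the same 4096-address block, which the standard library can confirm directly:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.24.218/20")
    net = iface.network

    print(net)                                          # 172.31.16.0/20
    print(net.num_addresses)                            # 4096
    print(ipaddress.ip_address("172.31.16.1") in net)   # True: gateway is on-link
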
Jan 29 11:58:22.441471 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:58:22.442615 systemd[1]: Reached target machines.target - Containers. Jan 29 11:58:22.444853 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:58:22.454623 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:58:22.459481 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:58:22.461088 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:58:22.467546 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:58:22.477620 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:58:22.482780 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:58:22.489216 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:58:22.515963 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:58:22.531176 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:58:22.533744 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:58:22.543339 kernel: loop0: detected capacity change from 0 to 140768 Jan 29 11:58:22.657370 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:58:22.679173 kernel: loop1: detected capacity change from 0 to 61336 Jan 29 11:58:22.719338 kernel: loop2: detected capacity change from 0 to 210664 Jan 29 11:58:22.846375 kernel: loop3: detected capacity change from 0 to 142488 Jan 29 11:58:22.933397 kernel: loop4: detected capacity change from 0 to 140768 Jan 29 11:58:22.954356 kernel: loop5: detected capacity change from 0 to 61336 Jan 29 11:58:22.968607 kernel: loop6: detected capacity change from 0 to 210664 Jan 29 11:58:22.997378 kernel: loop7: detected capacity change from 0 to 142488 Jan 29 11:58:23.018051 (sd-merge)[1795]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 29 11:58:23.018767 (sd-merge)[1795]: Merged extensions into '/usr'. Jan 29 11:58:23.023531 systemd[1]: Reloading requested from client PID 1782 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:58:23.023552 systemd[1]: Reloading... Jan 29 11:58:23.129358 zram_generator::config[1819]: No configuration found. Jan 29 11:58:23.336899 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:58:23.441603 systemd[1]: Reloading finished in 417 ms. Jan 29 11:58:23.459854 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:58:23.477596 systemd[1]: Starting ensure-sysext.service... Jan 29 11:58:23.489344 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:58:23.503914 systemd[1]: Reloading requested from client PID 1877 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:58:23.503934 systemd[1]: Reloading... 
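
(sd-merge) above reports merging the extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-ami' into /usr. systemd-sysext discovers extension images in a small set of fixed directories (the kubernetes image in this log lives under /etc/extensions as a symlink into /opt/extensions). The sketch below covers only the discovery step, under the assumption of the standard search paths; the merge itself builds an overlayfs and is omitted:

    import os

    # Common sysext search paths (not an exhaustive list).
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions():
        found = []
        for d in SEARCH_DIRS:
            if not os.path.isdir(d):
                continue
            for name in sorted(os.listdir(d)):
                path = os.path.join(d, name)
                # Extensions are either .raw disk images or plain directories.
                if name.endswith(".raw") or os.path.isdir(path):
                    found.append(path)
        return found

    if __name__ == "__main__":
        for path in discover_extensions():
            print(path)
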
Jan 29 11:58:23.510294 systemd-networkd[1652]: eth0: Gained IPv6LL Jan 29 11:58:23.540717 systemd-tmpfiles[1878]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:58:23.541241 systemd-tmpfiles[1878]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:58:23.543795 systemd-tmpfiles[1878]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:58:23.544344 systemd-tmpfiles[1878]: ACLs are not supported, ignoring. Jan 29 11:58:23.544543 systemd-tmpfiles[1878]: ACLs are not supported, ignoring. Jan 29 11:58:23.555556 systemd-tmpfiles[1878]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:58:23.556185 systemd-tmpfiles[1878]: Skipping /boot Jan 29 11:58:23.585815 systemd-tmpfiles[1878]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:58:23.585834 systemd-tmpfiles[1878]: Skipping /boot Jan 29 11:58:23.644333 zram_generator::config[1906]: No configuration found. Jan 29 11:58:23.805321 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:58:23.895779 systemd[1]: Reloading finished in 391 ms. Jan 29 11:58:23.912411 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:58:23.921074 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:58:23.935589 ldconfig[1778]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:58:23.936509 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:58:23.941526 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:58:23.953517 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:58:23.965778 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:58:23.971805 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:58:23.978372 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:58:24.000860 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:58:24.001169 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:58:24.006682 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:58:24.017715 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:58:24.033246 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:58:24.034872 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:58:24.035467 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:58:24.052643 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:58:24.052894 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 29 11:58:24.068663 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:58:24.086730 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:58:24.093208 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:58:24.093510 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:58:24.098740 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:58:24.102467 augenrules[2000]: No rules Jan 29 11:58:24.100413 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:58:24.108927 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:58:24.113277 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:58:24.113507 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:58:24.122268 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:58:24.123612 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:58:24.126065 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:58:24.132074 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:58:24.132324 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:58:24.144243 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:58:24.145062 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:58:24.161815 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:58:24.162228 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:58:24.173402 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:58:24.174219 systemd-resolved[1973]: Positive Trust Anchors: Jan 29 11:58:24.174233 systemd-resolved[1973]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:58:24.174283 systemd-resolved[1973]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:58:24.187459 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:58:24.192671 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:58:24.202987 systemd-resolved[1973]: Defaulting to hostname 'linux'. Jan 29 11:58:24.211465 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
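
systemd-resolved's positive trust anchor above is the root zone's DS record for the 2017 KSK: key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256). Parsing the presentation format is mechanical, as this small sketch shows:

    def parse_ds(rr):
        # "<owner> IN DS <key tag> <algorithm> <digest type> <digest>"
        owner, _cls, _type, key_tag, alg, digest_type, digest = rr.split()
        return {
            "owner": owner,
            "key_tag": int(key_tag),
            "algorithm": int(alg),            # 8 = RSASHA256
            "digest_type": int(digest_type),  # 2 = SHA-256
            "digest": digest.lower(),
        }

    ds = parse_ds(". IN DS 20326 8 2 "
                  "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    print(ds["key_tag"], ds["algorithm"], ds["digest_type"])  # 20326 8 2
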
Jan 29 11:58:24.212985 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:58:24.213288 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:58:24.214929 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:58:24.216367 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:58:24.218769 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:58:24.222830 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:58:24.224833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:58:24.225134 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:58:24.228226 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:58:24.228786 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:58:24.231304 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:58:24.231483 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:58:24.233891 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:58:24.234088 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:58:24.240757 systemd[1]: Reached target network.target - Network. Jan 29 11:58:24.242664 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:58:24.250053 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:58:24.254351 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:58:24.254451 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:58:24.254485 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:58:24.254526 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:58:24.256560 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:58:24.257999 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:58:24.259497 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:58:24.260760 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:58:24.262120 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:58:24.263466 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:58:24.263515 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:58:24.265239 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:58:24.267790 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:58:24.271143 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jan 29 11:58:24.275607 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:58:24.278219 systemd[1]: Finished ensure-sysext.service. Jan 29 11:58:24.279633 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:58:24.283599 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:58:24.285837 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:58:24.288999 systemd[1]: System is tainted: cgroupsv1 Jan 29 11:58:24.289067 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:58:24.289103 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:58:24.297551 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:58:24.307753 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 11:58:24.315529 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:58:24.317564 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:58:24.331607 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:58:24.333086 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:58:24.339676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:58:24.355341 jq[2041]: false Jan 29 11:58:24.363416 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:58:24.397917 systemd[1]: Started ntpd.service - Network Time Service. Jan 29 11:58:24.426467 extend-filesystems[2042]: Found loop4 Jan 29 11:58:24.426467 extend-filesystems[2042]: Found loop5 Jan 29 11:58:24.444724 extend-filesystems[2042]: Found loop6 Jan 29 11:58:24.444724 extend-filesystems[2042]: Found loop7 Jan 29 11:58:24.444724 extend-filesystems[2042]: Found nvme0n1 Jan 29 11:58:24.444724 extend-filesystems[2042]: Found nvme0n1p1 Jan 29 11:58:24.444724 extend-filesystems[2042]: Found nvme0n1p2 Jan 29 11:58:24.444724 extend-filesystems[2042]: Found nvme0n1p3 Jan 29 11:58:24.444724 extend-filesystems[2042]: Found usr Jan 29 11:58:24.444724 extend-filesystems[2042]: Found nvme0n1p4 Jan 29 11:58:24.444724 extend-filesystems[2042]: Found nvme0n1p6 Jan 29 11:58:24.444724 extend-filesystems[2042]: Found nvme0n1p7 Jan 29 11:58:24.444724 extend-filesystems[2042]: Found nvme0n1p9 Jan 29 11:58:24.444724 extend-filesystems[2042]: Checking size of /dev/nvme0n1p9 Jan 29 11:58:24.431592 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:58:24.462447 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:58:24.482451 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 29 11:58:24.492467 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:58:24.505882 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:58:24.531126 dbus-daemon[2040]: [system] SELinux support is enabled Jan 29 11:58:24.536746 systemd[1]: Starting systemd-logind.service - User Login Management... 
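
extend-filesystems above enumerates the block devices and then checks the size of /dev/nvme0n1p9, the root filesystem's partition, to decide whether an online resize is needed. A rough version of that comparison is sketched below; the device names are taken from this log, but the real tool's logic differs in detail:

    import os

    def partition_bytes(dev="nvme0n1p9", disk="nvme0n1"):
        # /sys reports block device sizes in 512-byte sectors.
        with open(f"/sys/block/{disk}/{dev}/size") as f:
            return int(f.read()) * 512

    def filesystem_bytes(mountpoint="/"):
        st = os.statvfs(mountpoint)
        return st.f_blocks * st.f_frsize

    if __name__ == "__main__":
        part, fs = partition_bytes(), filesystem_bytes()
        if part > fs:
            print(f"partition {part} > filesystem {fs}: resize needed")
        else:
            print("filesystem already fills the partition")
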
Jan 29 11:58:24.540362 dbus-daemon[2040]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1652 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 29 11:58:24.542064 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:58:24.556251 extend-filesystems[2042]: Resized partition /dev/nvme0n1p9 Jan 29 11:58:24.563587 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:58:24.590276 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:58:24.596969 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:58:24.600277 extend-filesystems[2078]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:58:24.607333 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 29 11:58:24.617535 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:58:24.617891 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:58:24.620969 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:58:24.621339 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:58:24.624587 jq[2076]: true Jan 29 11:58:24.629612 ntpd[2047]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 29 11:58:24.644011 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 29 11:58:24.644011 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 11:58:24.644011 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: ---------------------------------------------------- Jan 29 11:58:24.644011 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: ntp-4 is maintained by Network Time Foundation, Jan 29 11:58:24.644011 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 11:58:24.644011 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: corporation. Support and training for ntp-4 are Jan 29 11:58:24.644011 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: available at https://www.nwtime.org/support Jan 29 11:58:24.644011 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: ---------------------------------------------------- Jan 29 11:58:24.629641 ntpd[2047]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 11:58:24.655074 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:58:24.656087 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: proto: precision = 0.071 usec (-24) Jan 29 11:58:24.656087 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: basedate set to 2025-01-17 Jan 29 11:58:24.656087 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: gps base set to 2025-01-19 (week 2350) Jan 29 11:58:24.629651 ntpd[2047]: ---------------------------------------------------- Jan 29 11:58:24.629660 ntpd[2047]: ntp-4 is maintained by Network Time Foundation, Jan 29 11:58:24.629669 ntpd[2047]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 11:58:24.629678 ntpd[2047]: corporation. Support and training for ntp-4 are Jan 29 11:58:24.660599 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 29 11:58:24.629687 ntpd[2047]: available at https://www.nwtime.org/support Jan 29 11:58:24.629696 ntpd[2047]: ---------------------------------------------------- Jan 29 11:58:24.647092 ntpd[2047]: proto: precision = 0.071 usec (-24) Jan 29 11:58:24.653899 ntpd[2047]: basedate set to 2025-01-17 Jan 29 11:58:24.653922 ntpd[2047]: gps base set to 2025-01-19 (week 2350) Jan 29 11:58:24.666618 ntpd[2047]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 11:58:24.671143 ntpd[2047]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 11:58:24.674853 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 11:58:24.674853 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 11:58:24.678324 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 11:58:24.678324 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: Listen normally on 3 eth0 172.31.24.218:123 Jan 29 11:58:24.678324 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: Listen normally on 4 lo [::1]:123 Jan 29 11:58:24.678324 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: Listen normally on 5 eth0 [fe80::4e3:49ff:feed:409b%2]:123 Jan 29 11:58:24.675146 ntpd[2047]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 11:58:24.675199 ntpd[2047]: Listen normally on 3 eth0 172.31.24.218:123 Jan 29 11:58:24.680995 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: Listening on routing socket on fd #22 for interface updates Jan 29 11:58:24.675246 ntpd[2047]: Listen normally on 4 lo [::1]:123 Jan 29 11:58:24.675299 ntpd[2047]: Listen normally on 5 eth0 [fe80::4e3:49ff:feed:409b%2]:123 Jan 29 11:58:24.679911 ntpd[2047]: Listening on routing socket on fd #22 for interface updates Jan 29 11:58:24.692021 update_engine[2074]: I20250129 11:58:24.688593 2074 main.cc:92] Flatcar Update Engine starting Jan 29 11:58:24.717619 update_engine[2074]: I20250129 11:58:24.693387 2074 update_check_scheduler.cc:74] Next update check in 11m16s Jan 29 11:58:24.717670 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 11:58:24.717670 ntpd[2047]: 29 Jan 11:58:24 ntpd[2047]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 11:58:24.692258 ntpd[2047]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 11:58:24.692296 ntpd[2047]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 11:58:24.731258 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:58:24.735340 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 29 11:58:24.745467 (ntainerd)[2099]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:58:24.749362 extend-filesystems[2078]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 29 11:58:24.749362 extend-filesystems[2078]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:58:24.749362 extend-filesystems[2078]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
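The extend-filesystems unit grows the root filesystem online: resize2fs 1.47.1 resizes the mounted /dev/nvme0n1p9 from 553472 to 1489915 4k blocks, and the kernel's EXT4-fs lines confirm the on-line resize. A minimal Go sketch of the same step, assuming resize2fs is on PATH; with no explicit size argument it grows the filesystem to fill the underlying partition, exactly as the unit's output above shows:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        dev := "/dev/nvme0n1p9" // device name taken from the log; adjust for other layouts
        // On a mounted ext4 volume this performs an online resize.
        out, err := exec.Command("resize2fs", dev).CombinedOutput()
        if err != nil {
            log.Fatalf("resize2fs: %v\n%s", err, out)
        }
        log.Printf("%s", out)
    }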
Jan 29 11:58:24.755476 extend-filesystems[2042]: Resized filesystem in /dev/nvme0n1p9 Jan 29 11:58:24.761015 coreos-metadata[2039]: Jan 29 11:58:24.760 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 11:58:24.765586 coreos-metadata[2039]: Jan 29 11:58:24.762 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 29 11:58:24.765586 coreos-metadata[2039]: Jan 29 11:58:24.763 INFO Fetch successful Jan 29 11:58:24.765586 coreos-metadata[2039]: Jan 29 11:58:24.763 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 29 11:58:24.762208 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:58:24.765628 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:58:24.775955 coreos-metadata[2039]: Jan 29 11:58:24.769 INFO Fetch successful Jan 29 11:58:24.775955 coreos-metadata[2039]: Jan 29 11:58:24.769 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 29 11:58:24.775955 coreos-metadata[2039]: Jan 29 11:58:24.770 INFO Fetch successful Jan 29 11:58:24.775955 coreos-metadata[2039]: Jan 29 11:58:24.770 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 29 11:58:24.775955 coreos-metadata[2039]: Jan 29 11:58:24.770 INFO Fetch successful Jan 29 11:58:24.775955 coreos-metadata[2039]: Jan 29 11:58:24.770 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 29 11:58:24.775955 coreos-metadata[2039]: Jan 29 11:58:24.773 INFO Fetch failed with 404: resource not found Jan 29 11:58:24.775955 coreos-metadata[2039]: Jan 29 11:58:24.773 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 29 11:58:24.795339 coreos-metadata[2039]: Jan 29 11:58:24.778 INFO Fetch successful Jan 29 11:58:24.795339 coreos-metadata[2039]: Jan 29 11:58:24.778 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 29 11:58:24.795339 coreos-metadata[2039]: Jan 29 11:58:24.781 INFO Fetch successful Jan 29 11:58:24.795339 coreos-metadata[2039]: Jan 29 11:58:24.781 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 29 11:58:24.795339 coreos-metadata[2039]: Jan 29 11:58:24.782 INFO Fetch successful Jan 29 11:58:24.795339 coreos-metadata[2039]: Jan 29 11:58:24.782 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 29 11:58:24.795339 coreos-metadata[2039]: Jan 29 11:58:24.782 INFO Fetch successful Jan 29 11:58:24.795339 coreos-metadata[2039]: Jan 29 11:58:24.785 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 29 11:58:24.795339 coreos-metadata[2039]: Jan 29 11:58:24.786 INFO Fetch successful Jan 29 11:58:24.794233 dbus-daemon[2040]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 11:58:24.806195 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:58:24.819010 jq[2093]: true Jan 29 11:58:24.847600 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2106) Jan 29 11:58:24.872626 tar[2090]: linux-amd64/helm Jan 29 11:58:24.876757 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
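The coreos-metadata fetch pattern above is IMDSv2: a PUT to /latest/api/token obtains a session token, and every subsequent GET presents it in a header. The 404 on /meta-data/ipv6 simply means this instance has no IPv6 attribute. A sketch of the same two-step exchange in Go, with the endpoint paths taken from the log:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    func main() {
        // Step 1: obtain an IMDSv2 session token.
        req, _ := http.NewRequest(http.MethodPut,
            "http://169.254.169.254/latest/api/token", nil)
        req.Header.Set("X-aws-ec2-metadata-token-ttl-seconds", "21600")
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        tok, _ := io.ReadAll(resp.Body)
        resp.Body.Close()

        // Step 2: present the token on each metadata read.
        get, _ := http.NewRequest(http.MethodGet,
            "http://169.254.169.254/2021-01-03/meta-data/instance-id", nil)
        get.Header.Set("X-aws-ec2-metadata-token", strings.TrimSpace(string(tok)))
        resp, err = http.DefaultClient.Do(get)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        id, _ := io.ReadAll(resp.Body)
        fmt.Println(string(id)) // a 404 here would mean the attribute is absent
    }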
Jan 29 11:58:24.876805 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:58:24.892498 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 29 11:58:24.894089 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:58:24.894121 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:58:24.914558 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:58:24.918731 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:58:24.923799 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 29 11:58:24.941006 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 11:58:25.025885 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 29 11:58:25.027581 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:58:25.043322 systemd-logind[2069]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:58:25.047625 systemd-logind[2069]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 29 11:58:25.047655 systemd-logind[2069]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:58:25.047882 systemd-logind[2069]: New seat seat0. Jan 29 11:58:25.053449 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:58:25.076681 bash[2178]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:58:25.077658 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:58:25.093746 systemd[1]: Starting sshkeys.service... Jan 29 11:58:25.162023 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 11:58:25.173858 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 11:58:25.462383 amazon-ssm-agent[2163]: Initializing new seelog logger Jan 29 11:58:25.462383 amazon-ssm-agent[2163]: New Seelog Logger Creation Complete Jan 29 11:58:25.462383 amazon-ssm-agent[2163]: 2025/01/29 11:58:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:58:25.462383 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:58:25.462383 amazon-ssm-agent[2163]: 2025/01/29 11:58:25 processing appconfig overrides Jan 29 11:58:25.493423 amazon-ssm-agent[2163]: 2025/01/29 11:58:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:58:25.493423 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:58:25.493423 amazon-ssm-agent[2163]: 2025/01/29 11:58:25 processing appconfig overrides Jan 29 11:58:25.493423 amazon-ssm-agent[2163]: 2025/01/29 11:58:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:58:25.493423 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 29 11:58:25.505335 amazon-ssm-agent[2163]: 2025/01/29 11:58:25 processing appconfig overrides Jan 29 11:58:25.523663 amazon-ssm-agent[2163]: 2025-01-29 11:58:25 INFO Proxy environment variables: Jan 29 11:58:25.533406 amazon-ssm-agent[2163]: 2025/01/29 11:58:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:58:25.533406 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 11:58:25.533406 amazon-ssm-agent[2163]: 2025/01/29 11:58:25 processing appconfig overrides Jan 29 11:58:25.608655 coreos-metadata[2206]: Jan 29 11:58:25.608 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 11:58:25.617340 coreos-metadata[2206]: Jan 29 11:58:25.616 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 29 11:58:25.626608 coreos-metadata[2206]: Jan 29 11:58:25.624 INFO Fetch successful Jan 29 11:58:25.626608 coreos-metadata[2206]: Jan 29 11:58:25.624 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 11:58:25.630040 coreos-metadata[2206]: Jan 29 11:58:25.627 INFO Fetch successful Jan 29 11:58:25.633498 amazon-ssm-agent[2163]: 2025-01-29 11:58:25 INFO https_proxy: Jan 29 11:58:25.639216 unknown[2206]: wrote ssh authorized keys file for user: core Jan 29 11:58:25.723960 dbus-daemon[2040]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 11:58:25.731130 dbus-daemon[2040]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2139 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 11:58:25.733227 amazon-ssm-agent[2163]: 2025-01-29 11:58:25 INFO http_proxy: Jan 29 11:58:25.733547 locksmithd[2143]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:58:25.748370 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 29 11:58:25.758755 systemd[1]: Starting polkit.service - Authorization Manager... Jan 29 11:58:25.799375 update-ssh-keys[2278]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:58:25.805728 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 11:58:25.817658 systemd[1]: Finished sshkeys.service. Jan 29 11:58:25.838891 polkitd[2280]: Started polkitd version 121 Jan 29 11:58:25.847483 amazon-ssm-agent[2163]: 2025-01-29 11:58:25 INFO no_proxy: Jan 29 11:58:25.855964 polkitd[2280]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 11:58:25.856053 polkitd[2280]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 11:58:25.875765 polkitd[2280]: Finished loading, compiling and executing 2 rules Jan 29 11:58:25.878853 dbus-daemon[2040]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 11:58:25.879071 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 11:58:25.885953 polkitd[2280]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 11:58:25.926542 systemd-resolved[1973]: System hostname changed to 'ip-172-31-24-218'. 
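The dbus-daemon activation lines show how org.freedesktop.hostname1 comes to life: the first caller (here systemd-networkd, then systemd-hostnamed itself requesting polkit) triggers systemd to start the service on demand. A small Go sketch that triggers the same activation by reading the Hostname property, assuming the godbus library:

    package main

    import (
        "fmt"
        "log"

        "github.com/godbus/dbus/v5"
    )

    func main() {
        conn, err := dbus.SystemBus()
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        // hostnamed is D-Bus-activated on demand, as dbus-daemon logged above.
        obj := conn.Object("org.freedesktop.hostname1", "/org/freedesktop/hostname1")
        v, err := obj.GetProperty("org.freedesktop.hostname1.Hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(v.Value().(string)) // e.g. ip-172-31-24-218
    }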
Jan 29 11:58:25.926695 systemd-hostnamed[2139]: Hostname set to <ip-172-31-24-218> (transient) Jan 29 11:58:25.945784 amazon-ssm-agent[2163]: 2025-01-29 11:58:25 INFO Checking if agent identity type OnPrem can be assumed Jan 29 11:58:25.972348 containerd[2099]: time="2025-01-29T11:58:25.971594713Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 11:58:26.043398 amazon-ssm-agent[2163]: 2025-01-29 11:58:25 INFO Checking if agent identity type EC2 can be assumed Jan 29 11:58:26.079978 sshd_keygen[2095]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:58:26.139656 containerd[2099]: time="2025-01-29T11:58:26.139565134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:26.142353 containerd[2099]: time="2025-01-29T11:58:26.142277247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:58:26.142353 containerd[2099]: time="2025-01-29T11:58:26.142352492Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:58:26.142484 containerd[2099]: time="2025-01-29T11:58:26.142377935Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:58:26.142579 containerd[2099]: time="2025-01-29T11:58:26.142558147Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:58:26.142623 containerd[2099]: time="2025-01-29T11:58:26.142587860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:26.142702 containerd[2099]: time="2025-01-29T11:58:26.142681753Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:58:26.142742 containerd[2099]: time="2025-01-29T11:58:26.142708450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:26.143202 amazon-ssm-agent[2163]: 2025-01-29 11:58:25 INFO Agent will take identity from EC2 Jan 29 11:58:26.144611 containerd[2099]: time="2025-01-29T11:58:26.144575453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:58:26.144675 containerd[2099]: time="2025-01-29T11:58:26.144612240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:26.144675 containerd[2099]: time="2025-01-29T11:58:26.144633781Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:58:26.144675 containerd[2099]: time="2025-01-29T11:58:26.144648878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:58:26.144788 containerd[2099]: time="2025-01-29T11:58:26.144774089Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:26.145055 containerd[2099]: time="2025-01-29T11:58:26.145026412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:58:26.148032 containerd[2099]: time="2025-01-29T11:58:26.147055292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:58:26.148032 containerd[2099]: time="2025-01-29T11:58:26.147089004Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:58:26.148032 containerd[2099]: time="2025-01-29T11:58:26.147214996Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:58:26.148032 containerd[2099]: time="2025-01-29T11:58:26.147272110Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:58:26.161869 containerd[2099]: time="2025-01-29T11:58:26.161823019Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:58:26.162100 containerd[2099]: time="2025-01-29T11:58:26.161915837Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:58:26.162100 containerd[2099]: time="2025-01-29T11:58:26.162082547Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:58:26.162199 containerd[2099]: time="2025-01-29T11:58:26.162112994Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:58:26.162199 containerd[2099]: time="2025-01-29T11:58:26.162146340Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:58:26.162364 containerd[2099]: time="2025-01-29T11:58:26.162344531Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:58:26.162851 containerd[2099]: time="2025-01-29T11:58:26.162827895Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:58:26.162988 containerd[2099]: time="2025-01-29T11:58:26.162968055Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:58:26.163033 containerd[2099]: time="2025-01-29T11:58:26.162996069Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:58:26.163033 containerd[2099]: time="2025-01-29T11:58:26.163015924Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:58:26.163216 containerd[2099]: time="2025-01-29T11:58:26.163038142Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:58:26.163216 containerd[2099]: time="2025-01-29T11:58:26.163166417Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 29 11:58:26.163216 containerd[2099]: time="2025-01-29T11:58:26.163193389Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:58:26.163216 containerd[2099]: time="2025-01-29T11:58:26.163213856Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:58:26.163365 containerd[2099]: time="2025-01-29T11:58:26.163247730Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:58:26.163365 containerd[2099]: time="2025-01-29T11:58:26.163267243Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:58:26.163365 containerd[2099]: time="2025-01-29T11:58:26.163287008Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:58:26.163365 containerd[2099]: time="2025-01-29T11:58:26.163304760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:58:26.163365 containerd[2099]: time="2025-01-29T11:58:26.163346547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:58:26.163601 containerd[2099]: time="2025-01-29T11:58:26.163370588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:58:26.163601 containerd[2099]: time="2025-01-29T11:58:26.163402188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:58:26.163601 containerd[2099]: time="2025-01-29T11:58:26.163424693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:58:26.163601 containerd[2099]: time="2025-01-29T11:58:26.163443092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:58:26.163601 containerd[2099]: time="2025-01-29T11:58:26.163513278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:58:26.163601 containerd[2099]: time="2025-01-29T11:58:26.163533572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:58:26.163601 containerd[2099]: time="2025-01-29T11:58:26.163552793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:58:26.163601 containerd[2099]: time="2025-01-29T11:58:26.163572398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:58:26.163601 containerd[2099]: time="2025-01-29T11:58:26.163594202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:58:26.163917 containerd[2099]: time="2025-01-29T11:58:26.163611186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:58:26.163917 containerd[2099]: time="2025-01-29T11:58:26.163631320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:58:26.163917 containerd[2099]: time="2025-01-29T11:58:26.163657437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 29 11:58:26.163917 containerd[2099]: time="2025-01-29T11:58:26.163679731Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:58:26.163917 containerd[2099]: time="2025-01-29T11:58:26.163716182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:58:26.163917 containerd[2099]: time="2025-01-29T11:58:26.163734094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:58:26.163917 containerd[2099]: time="2025-01-29T11:58:26.163750553Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:58:26.163917 containerd[2099]: time="2025-01-29T11:58:26.163799898Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:58:26.163917 containerd[2099]: time="2025-01-29T11:58:26.163825537Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:58:26.163917 containerd[2099]: time="2025-01-29T11:58:26.163842619Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:58:26.163917 containerd[2099]: time="2025-01-29T11:58:26.163860611Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:58:26.163917 containerd[2099]: time="2025-01-29T11:58:26.163877064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:58:26.163917 containerd[2099]: time="2025-01-29T11:58:26.163896152Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:58:26.163917 containerd[2099]: time="2025-01-29T11:58:26.163915828Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:58:26.164416 containerd[2099]: time="2025-01-29T11:58:26.163930276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:58:26.170345 containerd[2099]: time="2025-01-29T11:58:26.168354781Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:58:26.170345 containerd[2099]: time="2025-01-29T11:58:26.168556042Z" level=info msg="Connect containerd service" Jan 29 11:58:26.170345 containerd[2099]: time="2025-01-29T11:58:26.168608930Z" level=info msg="using legacy CRI server" Jan 29 11:58:26.170345 containerd[2099]: time="2025-01-29T11:58:26.168619413Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:58:26.170345 containerd[2099]: time="2025-01-29T11:58:26.168761955Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:58:26.170345 containerd[2099]: time="2025-01-29T11:58:26.169550674Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 
11:58:26.170345 containerd[2099]: time="2025-01-29T11:58:26.169969586Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:58:26.170345 containerd[2099]: time="2025-01-29T11:58:26.170020013Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:58:26.170345 containerd[2099]: time="2025-01-29T11:58:26.170082127Z" level=info msg="Start subscribing containerd event" Jan 29 11:58:26.170345 containerd[2099]: time="2025-01-29T11:58:26.170127239Z" level=info msg="Start recovering state" Jan 29 11:58:26.170345 containerd[2099]: time="2025-01-29T11:58:26.170197736Z" level=info msg="Start event monitor" Jan 29 11:58:26.170345 containerd[2099]: time="2025-01-29T11:58:26.170210932Z" level=info msg="Start snapshots syncer" Jan 29 11:58:26.170345 containerd[2099]: time="2025-01-29T11:58:26.170222707Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:58:26.170345 containerd[2099]: time="2025-01-29T11:58:26.170232538Z" level=info msg="Start streaming server" Jan 29 11:58:26.170456 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:58:26.175424 containerd[2099]: time="2025-01-29T11:58:26.174926585Z" level=info msg="containerd successfully booted in 0.205420s" Jan 29 11:58:26.190000 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:58:26.200659 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:58:26.233979 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:58:26.234298 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:58:26.242436 amazon-ssm-agent[2163]: 2025-01-29 11:58:25 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 11:58:26.244635 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:58:26.298468 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:58:26.307732 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:58:26.320926 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:58:26.322978 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:58:26.341817 amazon-ssm-agent[2163]: 2025-01-29 11:58:25 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 11:58:26.443332 amazon-ssm-agent[2163]: 2025-01-29 11:58:25 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 11:58:26.490575 tar[2090]: linux-amd64/LICENSE Jan 29 11:58:26.490575 tar[2090]: linux-amd64/README.md Jan 29 11:58:26.511053 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:58:26.540760 amazon-ssm-agent[2163]: 2025-01-29 11:58:25 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 29 11:58:26.641613 amazon-ssm-agent[2163]: 2025-01-29 11:58:25 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 29 11:58:26.741591 amazon-ssm-agent[2163]: 2025-01-29 11:58:25 INFO [amazon-ssm-agent] Starting Core Agent Jan 29 11:58:26.792796 amazon-ssm-agent[2163]: 2025-01-29 11:58:25 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 29 11:58:26.792796 amazon-ssm-agent[2163]: 2025-01-29 11:58:25 INFO [Registrar] Starting registrar module Jan 29 11:58:26.792796 amazon-ssm-agent[2163]: 2025-01-29 11:58:25 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 29 11:58:26.792796 amazon-ssm-agent[2163]: 2025-01-29 11:58:26 INFO [EC2Identity] EC2 registration was successful. 
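The earlier containerd error ("no network config found in /etc/cni/net.d") is expected on a node whose network plugin has not been installed yet; the CRI plugin retries once a config appears. Purely for illustration, this is the shape of a minimal bridge/host-local conflist such a plugin would drop into /etc/cni/net.d (every value here is a placeholder, none of it comes from this system):

    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.88.0.0/16",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        }
      ]
    }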
Jan 29 11:58:26.792796 amazon-ssm-agent[2163]: 2025-01-29 11:58:26 INFO [CredentialRefresher] credentialRefresher has started Jan 29 11:58:26.792796 amazon-ssm-agent[2163]: 2025-01-29 11:58:26 INFO [CredentialRefresher] Starting credentials refresher loop Jan 29 11:58:26.792796 amazon-ssm-agent[2163]: 2025-01-29 11:58:26 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 29 11:58:26.841975 amazon-ssm-agent[2163]: 2025-01-29 11:58:26 INFO [CredentialRefresher] Next credential rotation will be in 30.749994029933333 minutes Jan 29 11:58:27.006035 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:58:27.016220 systemd[1]: Started sshd@0-172.31.24.218:22-139.178.68.195:58016.service - OpenSSH per-connection server daemon (139.178.68.195:58016). Jan 29 11:58:27.232283 sshd[2324]: Accepted publickey for core from 139.178.68.195 port 58016 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 11:58:27.235738 sshd[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:27.247853 systemd-logind[2069]: New session 1 of user core. Jan 29 11:58:27.249297 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:58:27.258851 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:58:27.277591 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:58:27.290710 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:58:27.296135 (systemd)[2330]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:58:27.434821 systemd[2330]: Queued start job for default target default.target. Jan 29 11:58:27.435325 systemd[2330]: Created slice app.slice - User Application Slice. Jan 29 11:58:27.435357 systemd[2330]: Reached target paths.target - Paths. Jan 29 11:58:27.435376 systemd[2330]: Reached target timers.target - Timers. Jan 29 11:58:27.441449 systemd[2330]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:58:27.450567 systemd[2330]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:58:27.450773 systemd[2330]: Reached target sockets.target - Sockets. Jan 29 11:58:27.450794 systemd[2330]: Reached target basic.target - Basic System. Jan 29 11:58:27.451811 systemd[2330]: Reached target default.target - Main User Target. Jan 29 11:58:27.451882 systemd[2330]: Startup finished in 148ms. Jan 29 11:58:27.452075 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:58:27.457625 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:58:27.575555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:58:27.579804 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:58:27.586539 systemd[1]: Startup finished in 9.868s (kernel) + 8.669s (userspace) = 18.537s. Jan 29 11:58:27.654451 systemd[1]: Started sshd@1-172.31.24.218:22-139.178.68.195:58030.service - OpenSSH per-connection server daemon (139.178.68.195:58030). 
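sshd logs the client key as "RSA SHA256:...": the SHA-256 hash of the raw public key blob, base64-encoded without padding. A sketch that reproduces that fingerprint for a key from the authorized_keys file updated earlier in this log, assuming golang.org/x/crypto/ssh (the path is the one the log shows; only the first key on the first line is parsed):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        raw, err := os.ReadFile("/home/core/.ssh/authorized_keys")
        if err != nil {
            log.Fatal(err)
        }
        pub, _, _, _, err := ssh.ParseAuthorizedKey(raw)
        if err != nil {
            log.Fatal(err)
        }
        // Prints the same "SHA256:..." string sshd logs on "Accepted publickey".
        fmt.Println(ssh.FingerprintSHA256(pub))
    }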
Jan 29 11:58:27.728950 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:58:27.820688 amazon-ssm-agent[2163]: 2025-01-29 11:58:27 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 29 11:58:27.890689 sshd[2351]: Accepted publickey for core from 139.178.68.195 port 58030 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 11:58:27.892256 sshd[2351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:27.902094 systemd-logind[2069]: New session 2 of user core. Jan 29 11:58:27.909666 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:58:27.921405 amazon-ssm-agent[2163]: 2025-01-29 11:58:27 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2358) started Jan 29 11:58:28.022885 amazon-ssm-agent[2163]: 2025-01-29 11:58:27 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 29 11:58:28.037501 sshd[2351]: pam_unix(sshd:session): session closed for user core Jan 29 11:58:28.044719 systemd[1]: sshd@1-172.31.24.218:22-139.178.68.195:58030.service: Deactivated successfully. Jan 29 11:58:28.048474 systemd-logind[2069]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:58:28.049217 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:58:28.052405 systemd-logind[2069]: Removed session 2. Jan 29 11:58:28.065774 systemd[1]: Started sshd@2-172.31.24.218:22-139.178.68.195:58038.service - OpenSSH per-connection server daemon (139.178.68.195:58038). Jan 29 11:58:28.221929 sshd[2379]: Accepted publickey for core from 139.178.68.195 port 58038 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 11:58:28.223630 sshd[2379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:28.228869 systemd-logind[2069]: New session 3 of user core. Jan 29 11:58:28.234696 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:58:28.350524 sshd[2379]: pam_unix(sshd:session): session closed for user core Jan 29 11:58:28.355405 systemd[1]: sshd@2-172.31.24.218:22-139.178.68.195:58038.service: Deactivated successfully. Jan 29 11:58:28.360089 systemd-logind[2069]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:58:28.360818 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:58:28.363132 systemd-logind[2069]: Removed session 3. Jan 29 11:58:28.379135 systemd[1]: Started sshd@3-172.31.24.218:22-139.178.68.195:58048.service - OpenSSH per-connection server daemon (139.178.68.195:58048). Jan 29 11:58:28.550127 sshd[2387]: Accepted publickey for core from 139.178.68.195 port 58048 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 11:58:28.551140 sshd[2387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:28.557972 systemd-logind[2069]: New session 4 of user core. Jan 29 11:58:28.560840 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:58:28.686114 sshd[2387]: pam_unix(sshd:session): session closed for user core Jan 29 11:58:28.692327 systemd[1]: sshd@3-172.31.24.218:22-139.178.68.195:58048.service: Deactivated successfully. Jan 29 11:58:28.697457 systemd-logind[2069]: Session 4 logged out. Waiting for processes to exit. 
Jan 29 11:58:28.698397 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:58:28.701170 systemd-logind[2069]: Removed session 4. Jan 29 11:58:28.717021 systemd[1]: Started sshd@4-172.31.24.218:22-139.178.68.195:58052.service - OpenSSH per-connection server daemon (139.178.68.195:58052). Jan 29 11:58:28.885796 sshd[2395]: Accepted publickey for core from 139.178.68.195 port 58052 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 11:58:28.887728 sshd[2395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:28.896525 systemd-logind[2069]: New session 5 of user core. Jan 29 11:58:28.903816 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:58:28.987935 kubelet[2349]: E0129 11:58:28.987868 2349 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:58:28.990938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:58:28.991378 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:58:29.037645 sudo[2402]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:58:29.038049 sudo[2402]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:58:29.054557 sudo[2402]: pam_unix(sudo:session): session closed for user root Jan 29 11:58:29.078144 sshd[2395]: pam_unix(sshd:session): session closed for user core Jan 29 11:58:29.082214 systemd[1]: sshd@4-172.31.24.218:22-139.178.68.195:58052.service: Deactivated successfully. Jan 29 11:58:29.087006 systemd-logind[2069]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:58:29.088719 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:58:29.090749 systemd-logind[2069]: Removed session 5. Jan 29 11:58:29.104912 systemd[1]: Started sshd@5-172.31.24.218:22-139.178.68.195:58056.service - OpenSSH per-connection server daemon (139.178.68.195:58056). Jan 29 11:58:29.259253 sshd[2407]: Accepted publickey for core from 139.178.68.195 port 58056 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 11:58:29.260858 sshd[2407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:29.265427 systemd-logind[2069]: New session 6 of user core. Jan 29 11:58:29.275667 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:58:29.373570 sudo[2412]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:58:29.373959 sudo[2412]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:58:29.377849 sudo[2412]: pam_unix(sudo:session): session closed for user root Jan 29 11:58:29.383763 sudo[2411]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 11:58:29.384148 sudo[2411]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:58:29.404973 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 11:58:29.407084 auditctl[2415]: No rules Jan 29 11:58:29.407569 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:58:29.407893 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. 
Jan 29 11:58:29.422108 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:58:29.450760 augenrules[2434]: No rules Jan 29 11:58:29.452615 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:58:29.454568 sudo[2411]: pam_unix(sudo:session): session closed for user root Jan 29 11:58:29.478055 sshd[2407]: pam_unix(sshd:session): session closed for user core Jan 29 11:58:29.482724 systemd[1]: sshd@5-172.31.24.218:22-139.178.68.195:58056.service: Deactivated successfully. Jan 29 11:58:29.487358 systemd-logind[2069]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:58:29.488632 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:58:29.489625 systemd-logind[2069]: Removed session 6. Jan 29 11:58:29.504694 systemd[1]: Started sshd@6-172.31.24.218:22-139.178.68.195:58070.service - OpenSSH per-connection server daemon (139.178.68.195:58070). Jan 29 11:58:29.659015 sshd[2443]: Accepted publickey for core from 139.178.68.195 port 58070 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 11:58:29.659833 sshd[2443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:29.669809 systemd-logind[2069]: New session 7 of user core. Jan 29 11:58:29.682716 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:58:29.787522 sudo[2447]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:58:29.787919 sudo[2447]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:58:30.394667 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:58:30.397597 (dockerd)[2463]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:58:30.967941 dockerd[2463]: time="2025-01-29T11:58:30.967845476Z" level=info msg="Starting up" Jan 29 11:58:31.109191 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport275130465-merged.mount: Deactivated successfully. Jan 29 11:58:31.427073 dockerd[2463]: time="2025-01-29T11:58:31.426950306Z" level=info msg="Loading containers: start." Jan 29 11:58:31.603406 kernel: Initializing XFRM netlink socket Jan 29 11:58:31.661513 (udev-worker)[2483]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:58:31.750062 systemd-networkd[1652]: docker0: Link UP Jan 29 11:58:31.774010 dockerd[2463]: time="2025-01-29T11:58:31.773968482Z" level=info msg="Loading containers: done." Jan 29 11:58:31.827909 dockerd[2463]: time="2025-01-29T11:58:31.827855646Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:58:31.830348 dockerd[2463]: time="2025-01-29T11:58:31.828349360Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 11:58:31.830348 dockerd[2463]: time="2025-01-29T11:58:31.828584805Z" level=info msg="Daemon has completed initialization" Jan 29 11:58:31.875042 dockerd[2463]: time="2025-01-29T11:58:31.874801219Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:58:31.875782 systemd[1]: Started docker.service - Docker Application Container Engine. 
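"API listen on /run/docker.sock" means the Engine API is now reachable as plain HTTP over a unix socket. A minimal Go client sketch; the "docker" host name in the URL is a placeholder (HTTP requires some host), only the socket path matters:

    package main

    import (
        "context"
        "fmt"
        "io"
        "log"
        "net"
        "net/http"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                // Route every request to the daemon's unix socket.
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
                },
            },
        }
        resp, err := client.Get("http://docker/version")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s\n", body) // JSON with Version, ApiVersion, ...
    }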
Jan 29 11:58:33.263106 containerd[2099]: time="2025-01-29T11:58:33.263053338Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 11:58:33.891039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3480917396.mount: Deactivated successfully. Jan 29 11:58:35.565303 containerd[2099]: time="2025-01-29T11:58:35.565252614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:35.566529 containerd[2099]: time="2025-01-29T11:58:35.566479332Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 29 11:58:35.567378 containerd[2099]: time="2025-01-29T11:58:35.567343463Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:35.571500 containerd[2099]: time="2025-01-29T11:58:35.571159428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:35.572270 containerd[2099]: time="2025-01-29T11:58:35.572233684Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.309136387s" Jan 29 11:58:35.572372 containerd[2099]: time="2025-01-29T11:58:35.572281380Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 11:58:35.596742 containerd[2099]: time="2025-01-29T11:58:35.596222200Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 11:58:37.384456 containerd[2099]: time="2025-01-29T11:58:37.384400549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:37.385754 containerd[2099]: time="2025-01-29T11:58:37.385704800Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 29 11:58:37.386796 containerd[2099]: time="2025-01-29T11:58:37.386734181Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:37.392689 containerd[2099]: time="2025-01-29T11:58:37.392624677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:37.393843 containerd[2099]: time="2025-01-29T11:58:37.393682758Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.796771426s" Jan 29 
11:58:37.393843 containerd[2099]: time="2025-01-29T11:58:37.393726354Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 11:58:37.417803 containerd[2099]: time="2025-01-29T11:58:37.417759103Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 11:58:38.767432 containerd[2099]: time="2025-01-29T11:58:38.767381990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:38.768862 containerd[2099]: time="2025-01-29T11:58:38.768806485Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 29 11:58:38.770175 containerd[2099]: time="2025-01-29T11:58:38.770122088Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:38.773587 containerd[2099]: time="2025-01-29T11:58:38.773477496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:38.774872 containerd[2099]: time="2025-01-29T11:58:38.774836682Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.357038902s" Jan 29 11:58:38.775232 containerd[2099]: time="2025-01-29T11:58:38.775120793Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 11:58:38.802773 containerd[2099]: time="2025-01-29T11:58:38.802729226Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 11:58:39.199575 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:58:39.212875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:58:39.567544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:58:39.569533 (kubelet)[2699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:58:39.671795 kubelet[2699]: E0129 11:58:39.671754 2699 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:58:39.679660 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:58:39.680086 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:58:40.070496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2667465026.mount: Deactivated successfully. 
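Both kubelet starts so far exit with status 1 for the same reason: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is written during kubeadm init/join, so these failures are normal until the node joins a cluster. For orientation only, a minimal illustrative KubeletConfiguration of the shape that file takes (the values are examples, not read from this host):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      anonymous:
        enabled: false
    clusterDNS:
      - 10.96.0.10          # illustrative service-CIDR DNS address
    clusterDomain: cluster.local
    cgroupDriver: cgroupfs  # consistent with SystemdCgroup:false in the containerd config above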
Jan 29 11:58:40.648834 containerd[2099]: time="2025-01-29T11:58:40.648777860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:40.651692 containerd[2099]: time="2025-01-29T11:58:40.651514983Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 29 11:58:40.653841 containerd[2099]: time="2025-01-29T11:58:40.652757933Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:40.655339 containerd[2099]: time="2025-01-29T11:58:40.655191161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:40.656043 containerd[2099]: time="2025-01-29T11:58:40.655852635Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.853079026s" Jan 29 11:58:40.656043 containerd[2099]: time="2025-01-29T11:58:40.655897247Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 11:58:40.681355 containerd[2099]: time="2025-01-29T11:58:40.681302019Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:58:41.220282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount866969385.mount: Deactivated successfully. 
Jan 29 11:58:42.306855 containerd[2099]: time="2025-01-29T11:58:42.306796512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:42.308299 containerd[2099]: time="2025-01-29T11:58:42.308258453Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 11:58:42.311287 containerd[2099]: time="2025-01-29T11:58:42.311011480Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:42.315756 containerd[2099]: time="2025-01-29T11:58:42.315689161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:42.317624 containerd[2099]: time="2025-01-29T11:58:42.316912010Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.635296527s" Jan 29 11:58:42.317624 containerd[2099]: time="2025-01-29T11:58:42.316956248Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 11:58:42.344862 containerd[2099]: time="2025-01-29T11:58:42.344826889Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 11:58:42.818451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4192428518.mount: Deactivated successfully. 
Jan 29 11:58:42.824529 containerd[2099]: time="2025-01-29T11:58:42.824482639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:42.825639 containerd[2099]: time="2025-01-29T11:58:42.825485394Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 29 11:58:42.828236 containerd[2099]: time="2025-01-29T11:58:42.826919057Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:42.829483 containerd[2099]: time="2025-01-29T11:58:42.829446953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:42.831195 containerd[2099]: time="2025-01-29T11:58:42.830348290Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 485.48239ms" Jan 29 11:58:42.831195 containerd[2099]: time="2025-01-29T11:58:42.830386323Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 11:58:42.856255 containerd[2099]: time="2025-01-29T11:58:42.855553441Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 11:58:43.376181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3455879123.mount: Deactivated successfully. Jan 29 11:58:45.685808 containerd[2099]: time="2025-01-29T11:58:45.685748372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:45.687323 containerd[2099]: time="2025-01-29T11:58:45.687251582Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 29 11:58:45.690344 containerd[2099]: time="2025-01-29T11:58:45.688541345Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:45.692253 containerd[2099]: time="2025-01-29T11:58:45.692216946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:45.696098 containerd[2099]: time="2025-01-29T11:58:45.696056942Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.840459412s" Jan 29 11:58:45.696394 containerd[2099]: time="2025-01-29T11:58:45.696353496Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 11:58:49.357677 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
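The image pulls above go straight through containerd rather than the docker daemon; the io.cri-containerd.image=managed labels indicate CRI-managed images, which containerd keeps in the k8s.io namespace. A sketch of an equivalent client-side pull in Go, assuming the github.com/containerd/containerd module matching the v1.7.21 daemon in this log:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-pulled images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(img.Name(), img.Target().Digest)
    }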
Jan 29 11:58:49.363639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:58:49.411157 systemd[1]: Reloading requested from client PID 2883 ('systemctl') (unit session-7.scope)... Jan 29 11:58:49.411177 systemd[1]: Reloading... Jan 29 11:58:49.555396 zram_generator::config[2924]: No configuration found. Jan 29 11:58:49.761357 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:58:49.856551 systemd[1]: Reloading finished in 444 ms. Jan 29 11:58:49.909765 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:58:49.909872 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:58:49.910285 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:58:49.926168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:58:50.161594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:58:50.162067 (kubelet)[2994]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:58:50.233529 kubelet[2994]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:58:50.234006 kubelet[2994]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:58:50.234049 kubelet[2994]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:58:50.234195 kubelet[2994]: I0129 11:58:50.234166 2994 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:58:50.567951 kubelet[2994]: I0129 11:58:50.567910 2994 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:58:50.567951 kubelet[2994]: I0129 11:58:50.567942 2994 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:58:50.569127 kubelet[2994]: I0129 11:58:50.569093 2994 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:58:50.617524 kubelet[2994]: I0129 11:58:50.617484 2994 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:58:50.623029 kubelet[2994]: E0129 11:58:50.622868 2994 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.24.218:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.24.218:6443: connect: connection refused Jan 29 11:58:50.639507 kubelet[2994]: I0129 11:58:50.639300 2994 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:58:50.655393 kubelet[2994]: I0129 11:58:50.655333 2994 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:58:50.660925 kubelet[2994]: I0129 11:58:50.655392 2994 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-218","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:58:50.661714 kubelet[2994]: I0129 11:58:50.661686 2994 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:58:50.661797 kubelet[2994]: I0129 11:58:50.661720 2994 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:58:50.667140 kubelet[2994]: I0129 11:58:50.667098 2994 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:58:50.668718 kubelet[2994]: W0129 11:58:50.668661 2994 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-218&limit=500&resourceVersion=0": dial tcp 172.31.24.218:6443: connect: connection refused Jan 29 11:58:50.668812 kubelet[2994]: E0129 11:58:50.668725 2994 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-218&limit=500&resourceVersion=0": dial tcp 172.31.24.218:6443: connect: connection refused Jan 29 11:58:50.670192 kubelet[2994]: I0129 11:58:50.670170 2994 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:58:50.670281 kubelet[2994]: I0129 11:58:50.670202 2994 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:58:50.671138 kubelet[2994]: I0129 11:58:50.671085 2994 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:58:50.673509 kubelet[2994]: I0129 11:58:50.673264 2994 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:58:50.678217 kubelet[2994]: W0129 11:58:50.677808 2994 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.218:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.218:6443: connect: connection refused Jan 29 11:58:50.678217 kubelet[2994]: E0129 11:58:50.677870 2994 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.218:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.218:6443: connect: connection refused Jan 29 11:58:50.678509 kubelet[2994]: I0129 11:58:50.678485 2994 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:58:50.680542 kubelet[2994]: I0129 11:58:50.680516 2994 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:58:50.680626 kubelet[2994]: W0129 11:58:50.680594 2994 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:58:50.681928 kubelet[2994]: I0129 11:58:50.681296 2994 server.go:1264] "Started kubelet" Jan 29 11:58:50.685872 kubelet[2994]: I0129 11:58:50.685041 2994 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:58:50.695008 kubelet[2994]: I0129 11:58:50.694394 2994 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:58:50.695008 kubelet[2994]: I0129 11:58:50.694453 2994 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:58:50.696057 kubelet[2994]: I0129 11:58:50.696038 2994 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:58:50.697537 kubelet[2994]: I0129 11:58:50.697475 2994 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:58:50.697837 kubelet[2994]: I0129 11:58:50.697810 2994 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:58:50.701635 kubelet[2994]: I0129 11:58:50.700435 2994 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:58:50.701635 kubelet[2994]: I0129 11:58:50.700503 2994 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:58:50.712562 kubelet[2994]: E0129 11:58:50.712520 2994 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-218?timeout=10s\": dial tcp 172.31.24.218:6443: connect: connection refused" interval="200ms" Jan 29 11:58:50.712711 kubelet[2994]: W0129 11:58:50.712647 2994 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.218:6443: connect: connection refused Jan 29 11:58:50.712711 kubelet[2994]: E0129 11:58:50.712702 2994 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.218:6443: connect: connection refused Jan 29 11:58:50.712970 kubelet[2994]: I0129 11:58:50.712910 2994 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:58:50.713142 kubelet[2994]: I0129 11:58:50.713005 2994 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:58:50.714862 kubelet[2994]: E0129 11:58:50.713267 2994 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.218:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.218:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-218.181f27f8c9ea9f6f default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-218,UID:ip-172-31-24-218,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-218,},FirstTimestamp:2025-01-29 11:58:50.681270127 +0000 UTC m=+0.513010421,LastTimestamp:2025-01-29 11:58:50.681270127 +0000 UTC m=+0.513010421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-218,}" Jan 29 11:58:50.715198 kubelet[2994]: I0129 11:58:50.715168 2994 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:58:50.718469 kubelet[2994]: I0129 11:58:50.718446 2994 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:58:50.718912 kubelet[2994]: I0129 11:58:50.718567 2994 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:58:50.718912 kubelet[2994]: I0129 11:58:50.718619 2994 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:58:50.718912 kubelet[2994]: E0129 11:58:50.718667 2994 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:58:50.727591 kubelet[2994]: W0129 11:58:50.727543 2994 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.218:6443: connect: connection refused Jan 29 11:58:50.727591 kubelet[2994]: E0129 11:58:50.727594 2994 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.218:6443: connect: connection refused Jan 29 11:58:50.728381 kubelet[2994]: I0129 11:58:50.728234 2994 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:58:50.743864 kubelet[2994]: E0129 11:58:50.743090 2994 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:58:50.757200 kubelet[2994]: I0129 11:58:50.757168 2994 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:58:50.757200 kubelet[2994]: I0129 11:58:50.757185 2994 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:58:50.757200 kubelet[2994]: I0129 11:58:50.757204 2994 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:58:50.758851 kubelet[2994]: I0129 11:58:50.758827 2994 policy_none.go:49] "None policy: Start" Jan 29 11:58:50.759442 kubelet[2994]: I0129 11:58:50.759423 2994 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:58:50.759523 kubelet[2994]: I0129 11:58:50.759460 2994 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:58:50.767339 kubelet[2994]: I0129 11:58:50.765387 2994 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:58:50.767339 kubelet[2994]: I0129 11:58:50.765610 2994 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:58:50.767339 kubelet[2994]: I0129 11:58:50.765737 2994 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:58:50.768695 kubelet[2994]: E0129 11:58:50.768681 2994 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-218\" not found" Jan 29 11:58:50.797057 kubelet[2994]: I0129 11:58:50.797029 2994 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-218" Jan 29 11:58:50.797538 kubelet[2994]: E0129 11:58:50.797509 2994 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.218:6443/api/v1/nodes\": dial tcp 172.31.24.218:6443: connect: connection refused" node="ip-172-31-24-218" Jan 29 11:58:50.820363 kubelet[2994]: I0129 11:58:50.819125 2994 topology_manager.go:215] "Topology Admit Handler" podUID="69394ca13187f3d2908b902d50903483" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-24-218" Jan 29 11:58:50.822790 kubelet[2994]: I0129 11:58:50.822760 2994 topology_manager.go:215] "Topology Admit Handler" podUID="c5f481308650f5f5effd8cc4ef6c56c2" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-24-218" Jan 29 11:58:50.824646 kubelet[2994]: I0129 11:58:50.824359 2994 topology_manager.go:215] "Topology Admit Handler" podUID="57af7b39f84e5ebe65c31e8df4841cbb" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-24-218" Jan 29 11:58:50.913907 kubelet[2994]: E0129 11:58:50.913862 2994 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-218?timeout=10s\": dial tcp 172.31.24.218:6443: connect: connection refused" interval="400ms" Jan 29 11:58:50.999837 kubelet[2994]: I0129 11:58:50.999805 2994 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-218" Jan 29 11:58:51.000200 kubelet[2994]: E0129 11:58:51.000158 2994 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.218:6443/api/v1/nodes\": dial tcp 172.31.24.218:6443: connect: connection refused" node="ip-172-31-24-218" Jan 29 11:58:51.001383 kubelet[2994]: I0129 11:58:51.001361 2994 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/69394ca13187f3d2908b902d50903483-ca-certs\") pod \"kube-apiserver-ip-172-31-24-218\" (UID: \"69394ca13187f3d2908b902d50903483\") " pod="kube-system/kube-apiserver-ip-172-31-24-218" Jan 29 11:58:51.001476 kubelet[2994]: I0129 11:58:51.001393 2994 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/69394ca13187f3d2908b902d50903483-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-218\" (UID: \"69394ca13187f3d2908b902d50903483\") " pod="kube-system/kube-apiserver-ip-172-31-24-218" Jan 29 11:58:51.001476 kubelet[2994]: I0129 11:58:51.001422 2994 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5f481308650f5f5effd8cc4ef6c56c2-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-218\" (UID: \"c5f481308650f5f5effd8cc4ef6c56c2\") " pod="kube-system/kube-controller-manager-ip-172-31-24-218" Jan 29 11:58:51.001476 kubelet[2994]: I0129 11:58:51.001445 2994 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5f481308650f5f5effd8cc4ef6c56c2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-218\" (UID: \"c5f481308650f5f5effd8cc4ef6c56c2\") " pod="kube-system/kube-controller-manager-ip-172-31-24-218" Jan 29 11:58:51.001476 kubelet[2994]: I0129 11:58:51.001470 2994 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5f481308650f5f5effd8cc4ef6c56c2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-218\" (UID: \"c5f481308650f5f5effd8cc4ef6c56c2\") " pod="kube-system/kube-controller-manager-ip-172-31-24-218" Jan 29 11:58:51.001785 kubelet[2994]: I0129 11:58:51.001497 2994 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/57af7b39f84e5ebe65c31e8df4841cbb-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-218\" (UID: \"57af7b39f84e5ebe65c31e8df4841cbb\") " pod="kube-system/kube-scheduler-ip-172-31-24-218" Jan 29 11:58:51.001785 kubelet[2994]: I0129 11:58:51.001521 2994 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/69394ca13187f3d2908b902d50903483-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-218\" (UID: \"69394ca13187f3d2908b902d50903483\") " pod="kube-system/kube-apiserver-ip-172-31-24-218" Jan 29 11:58:51.001785 kubelet[2994]: I0129 11:58:51.001545 2994 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c5f481308650f5f5effd8cc4ef6c56c2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-218\" (UID: \"c5f481308650f5f5effd8cc4ef6c56c2\") " pod="kube-system/kube-controller-manager-ip-172-31-24-218" Jan 29 11:58:51.001785 kubelet[2994]: I0129 11:58:51.001570 2994 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c5f481308650f5f5effd8cc4ef6c56c2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-218\" (UID: \"c5f481308650f5f5effd8cc4ef6c56c2\") " pod="kube-system/kube-controller-manager-ip-172-31-24-218" Jan 29 
11:58:51.131075 containerd[2099]: time="2025-01-29T11:58:51.130921096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-218,Uid:69394ca13187f3d2908b902d50903483,Namespace:kube-system,Attempt:0,}" Jan 29 11:58:51.133039 containerd[2099]: time="2025-01-29T11:58:51.132635730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-218,Uid:57af7b39f84e5ebe65c31e8df4841cbb,Namespace:kube-system,Attempt:0,}" Jan 29 11:58:51.133505 containerd[2099]: time="2025-01-29T11:58:51.133476522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-218,Uid:c5f481308650f5f5effd8cc4ef6c56c2,Namespace:kube-system,Attempt:0,}" Jan 29 11:58:51.325674 kubelet[2994]: E0129 11:58:51.325617 2994 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-218?timeout=10s\": dial tcp 172.31.24.218:6443: connect: connection refused" interval="800ms" Jan 29 11:58:51.402766 kubelet[2994]: I0129 11:58:51.402668 2994 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-218" Jan 29 11:58:51.403142 kubelet[2994]: E0129 11:58:51.403083 2994 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.218:6443/api/v1/nodes\": dial tcp 172.31.24.218:6443: connect: connection refused" node="ip-172-31-24-218" Jan 29 11:58:51.544227 kubelet[2994]: W0129 11:58:51.544168 2994 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.218:6443: connect: connection refused Jan 29 11:58:51.544227 kubelet[2994]: E0129 11:58:51.544234 2994 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.218:6443: connect: connection refused Jan 29 11:58:51.621255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4246815515.mount: Deactivated successfully. 
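Notice the interval in the "Failed to ensure lease exists, will retry" entries: 200ms, then 400ms, then 800ms (and 1.6s further down), doubling each time the API server at 172.31.24.218:6443 refuses the connection. A minimal Go sketch of that capped doubling backoff, assuming nothing about the actual client-go implementation:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries attempt until it succeeds, doubling the wait
// after every failure up to maxWait -- the same 200ms -> 400ms -> 800ms
// -> 1.6s progression visible in the lease-controller entries above.
func retryWithBackoff(attempt func() error, initial, maxWait time.Duration) {
	interval := initial
	for attempt() != nil {
		fmt.Printf("failed, will retry; interval=%s\n", interval)
		time.Sleep(interval)
		if interval *= 2; interval > maxWait {
			interval = maxWait
		}
	}
}

func main() {
	tries := 0
	retryWithBackoff(func() error {
		if tries++; tries < 5 {
			return errors.New("connect: connection refused")
		}
		return nil
	}, 200*time.Millisecond, 7*time.Second)
}

The retries stop mattering once the static kube-apiserver pod comes up, which is exactly what the sandbox and container starts below are working toward.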
Jan 29 11:58:51.631044 containerd[2099]: time="2025-01-29T11:58:51.630990516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:58:51.632235 containerd[2099]: time="2025-01-29T11:58:51.632017558Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:58:51.633402 containerd[2099]: time="2025-01-29T11:58:51.633369855Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:58:51.634390 containerd[2099]: time="2025-01-29T11:58:51.634362165Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:58:51.635226 containerd[2099]: time="2025-01-29T11:58:51.635178041Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:58:51.636506 containerd[2099]: time="2025-01-29T11:58:51.636478143Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:58:51.637224 containerd[2099]: time="2025-01-29T11:58:51.637132635Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:58:51.640019 containerd[2099]: time="2025-01-29T11:58:51.639987449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:58:51.642621 containerd[2099]: time="2025-01-29T11:58:51.641200584Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 507.587866ms" Jan 29 11:58:51.642621 containerd[2099]: time="2025-01-29T11:58:51.642523958Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 511.466661ms" Jan 29 11:58:51.645121 containerd[2099]: time="2025-01-29T11:58:51.645087656Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 512.384866ms" Jan 29 11:58:51.783803 kubelet[2994]: W0129 11:58:51.783738 2994 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.218:6443: connect: connection refused Jan 29 11:58:51.783803 
kubelet[2994]: E0129 11:58:51.783801 2994 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.218:6443: connect: connection refused Jan 29 11:58:51.827285 containerd[2099]: time="2025-01-29T11:58:51.826620900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:58:51.827285 containerd[2099]: time="2025-01-29T11:58:51.826712349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:58:51.827285 containerd[2099]: time="2025-01-29T11:58:51.826737295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:58:51.827285 containerd[2099]: time="2025-01-29T11:58:51.826842209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:58:51.835918 containerd[2099]: time="2025-01-29T11:58:51.835525840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:58:51.835918 containerd[2099]: time="2025-01-29T11:58:51.835607973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:58:51.835918 containerd[2099]: time="2025-01-29T11:58:51.835628046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:58:51.836684 containerd[2099]: time="2025-01-29T11:58:51.836167377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:58:51.847357 containerd[2099]: time="2025-01-29T11:58:51.846052430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:58:51.847357 containerd[2099]: time="2025-01-29T11:58:51.846119000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:58:51.847357 containerd[2099]: time="2025-01-29T11:58:51.846151457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:58:51.847357 containerd[2099]: time="2025-01-29T11:58:51.846280751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:58:51.969183 containerd[2099]: time="2025-01-29T11:58:51.968773257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-218,Uid:c5f481308650f5f5effd8cc4ef6c56c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f51fe7024701af882d989d533cd3a05556fe7065637aba5b07d15a22a70825a\"" Jan 29 11:58:51.983913 containerd[2099]: time="2025-01-29T11:58:51.983053628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-218,Uid:57af7b39f84e5ebe65c31e8df4841cbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab6c616e886ca451700fa7d7ec503eb794f4090cb790e8a1981b589272a8a89c\"" Jan 29 11:58:51.988598 containerd[2099]: time="2025-01-29T11:58:51.987184595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-218,Uid:69394ca13187f3d2908b902d50903483,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b397ce1b23fc65f0c2c64788c92c285fb3fa522f557c8bdf43a0e79002ac07d\"" Jan 29 11:58:51.988598 containerd[2099]: time="2025-01-29T11:58:51.988297068Z" level=info msg="CreateContainer within sandbox \"ab6c616e886ca451700fa7d7ec503eb794f4090cb790e8a1981b589272a8a89c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:58:51.989694 containerd[2099]: time="2025-01-29T11:58:51.989663242Z" level=info msg="CreateContainer within sandbox \"3f51fe7024701af882d989d533cd3a05556fe7065637aba5b07d15a22a70825a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:58:51.991391 containerd[2099]: time="2025-01-29T11:58:51.991349441Z" level=info msg="CreateContainer within sandbox \"1b397ce1b23fc65f0c2c64788c92c285fb3fa522f557c8bdf43a0e79002ac07d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:58:52.019729 containerd[2099]: time="2025-01-29T11:58:52.019689022Z" level=info msg="CreateContainer within sandbox \"3f51fe7024701af882d989d533cd3a05556fe7065637aba5b07d15a22a70825a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"72e553649f8eea9e33a60a15c8c5dcf487d68a8a3a12fd9699c2687051ef02b9\"" Jan 29 11:58:52.020545 containerd[2099]: time="2025-01-29T11:58:52.020520825Z" level=info msg="StartContainer for \"72e553649f8eea9e33a60a15c8c5dcf487d68a8a3a12fd9699c2687051ef02b9\"" Jan 29 11:58:52.028478 containerd[2099]: time="2025-01-29T11:58:52.028401227Z" level=info msg="CreateContainer within sandbox \"ab6c616e886ca451700fa7d7ec503eb794f4090cb790e8a1981b589272a8a89c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6bac1147184f4270e96bb18f005779bfea829a27881c9277470b0cfb255f3c06\"" Jan 29 11:58:52.030612 containerd[2099]: time="2025-01-29T11:58:52.030577522Z" level=info msg="StartContainer for \"6bac1147184f4270e96bb18f005779bfea829a27881c9277470b0cfb255f3c06\"" Jan 29 11:58:52.031781 containerd[2099]: time="2025-01-29T11:58:52.031624826Z" level=info msg="CreateContainer within sandbox \"1b397ce1b23fc65f0c2c64788c92c285fb3fa522f557c8bdf43a0e79002ac07d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0e35ac36eedab88c6ae66893474bb07ab37c33184ebc7bdd5cc39d942adc8642\"" Jan 29 11:58:52.032387 containerd[2099]: time="2025-01-29T11:58:52.032061413Z" level=info msg="StartContainer for \"0e35ac36eedab88c6ae66893474bb07ab37c33184ebc7bdd5cc39d942adc8642\"" Jan 29 11:58:52.104228 kubelet[2994]: W0129 11:58:52.104065 2994 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-218&limit=500&resourceVersion=0": dial tcp 172.31.24.218:6443: connect: connection refused Jan 29 11:58:52.105401 kubelet[2994]: E0129 11:58:52.105360 2994 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-218&limit=500&resourceVersion=0": dial tcp 172.31.24.218:6443: connect: connection refused Jan 29 11:58:52.127723 kubelet[2994]: E0129 11:58:52.127644 2994 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-218?timeout=10s\": dial tcp 172.31.24.218:6443: connect: connection refused" interval="1.6s" Jan 29 11:58:52.163976 containerd[2099]: time="2025-01-29T11:58:52.163912419Z" level=info msg="StartContainer for \"72e553649f8eea9e33a60a15c8c5dcf487d68a8a3a12fd9699c2687051ef02b9\" returns successfully" Jan 29 11:58:52.186935 containerd[2099]: time="2025-01-29T11:58:52.186349710Z" level=info msg="StartContainer for \"0e35ac36eedab88c6ae66893474bb07ab37c33184ebc7bdd5cc39d942adc8642\" returns successfully" Jan 29 11:58:52.211364 kubelet[2994]: I0129 11:58:52.209705 2994 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-218" Jan 29 11:58:52.211577 kubelet[2994]: E0129 11:58:52.211546 2994 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.218:6443/api/v1/nodes\": dial tcp 172.31.24.218:6443: connect: connection refused" node="ip-172-31-24-218" Jan 29 11:58:52.216855 kubelet[2994]: W0129 11:58:52.216805 2994 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.218:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.218:6443: connect: connection refused Jan 29 11:58:52.216855 kubelet[2994]: E0129 11:58:52.216863 2994 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.218:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.218:6443: connect: connection refused Jan 29 11:58:52.217030 containerd[2099]: time="2025-01-29T11:58:52.216955892Z" level=info msg="StartContainer for \"6bac1147184f4270e96bb18f005779bfea829a27881c9277470b0cfb255f3c06\" returns successfully" Jan 29 11:58:52.702103 kubelet[2994]: E0129 11:58:52.701563 2994 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.24.218:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.24.218:6443: connect: connection refused Jan 29 11:58:53.816359 kubelet[2994]: I0129 11:58:53.815722 2994 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-218" Jan 29 11:58:54.624816 kubelet[2994]: E0129 11:58:54.624683 2994 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-218\" not found" node="ip-172-31-24-218" Jan 29 11:58:54.705115 kubelet[2994]: I0129 11:58:54.704935 2994 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-24-218" Jan 29 11:58:54.749941 kubelet[2994]: E0129 11:58:54.749905 2994 
kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-24-218\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-218" Jan 29 11:58:55.679645 kubelet[2994]: I0129 11:58:55.679557 2994 apiserver.go:52] "Watching apiserver" Jan 29 11:58:55.700956 kubelet[2994]: I0129 11:58:55.700579 2994 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:58:55.963583 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 29 11:58:57.141496 systemd[1]: Reloading requested from client PID 3271 ('systemctl') (unit session-7.scope)... Jan 29 11:58:57.141515 systemd[1]: Reloading... Jan 29 11:58:57.322374 zram_generator::config[3311]: No configuration found. Jan 29 11:58:57.499504 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:58:57.623218 systemd[1]: Reloading finished in 480 ms. Jan 29 11:58:57.662390 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:58:57.662860 kubelet[2994]: I0129 11:58:57.662837 2994 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:58:57.681395 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:58:57.681814 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:58:57.688793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:58:57.946555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:58:57.954827 (kubelet)[3378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:58:58.048565 kubelet[3378]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:58:58.049342 kubelet[3378]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:58:58.049342 kubelet[3378]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:58:58.049342 kubelet[3378]: I0129 11:58:58.049005 3378 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:58:58.055665 kubelet[3378]: I0129 11:58:58.055631 3378 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:58:58.055665 kubelet[3378]: I0129 11:58:58.055656 3378 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:58:58.055930 kubelet[3378]: I0129 11:58:58.055909 3378 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:58:58.060103 kubelet[3378]: I0129 11:58:58.060029 3378 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
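The restarted kubelet (PID 3378) finds the result of the earlier bootstrap and loads its rotated client credential from /var/lib/kubelet/pki/kubelet-client-current.pem. To see what that bundle currently holds, a hedged Go sketch (the path comes from the certificate_store.go entry above; run it on the node itself, everything else here is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path taken from the certificate_store.go log entry above.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	// The file bundles the client certificate and key; walk the PEM
	// blocks and report validity for each certificate found.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
	}
}

Since "Client rotation is on, will bootstrap in background", the -current.pem file is repointed at the newest credential whenever rotation succeeds, so notAfter should sit comfortably in the future on a healthy node.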
Jan 29 11:58:58.062098 kubelet[3378]: I0129 11:58:58.062064 3378 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:58:58.086615 kubelet[3378]: I0129 11:58:58.085559 3378 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:58:58.086615 kubelet[3378]: I0129 11:58:58.086352 3378 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:58:58.086855 kubelet[3378]: I0129 11:58:58.086391 3378 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-218","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:58:58.086855 kubelet[3378]: I0129 11:58:58.086689 3378 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:58:58.086855 kubelet[3378]: I0129 11:58:58.086707 3378 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:58:58.091186 kubelet[3378]: I0129 11:58:58.089834 3378 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:58:58.091186 kubelet[3378]: I0129 11:58:58.090211 3378 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:58:58.091186 kubelet[3378]: I0129 11:58:58.090231 3378 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:58:58.091186 kubelet[3378]: I0129 11:58:58.090602 3378 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:58:58.091186 kubelet[3378]: I0129 11:58:58.090630 3378 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:58:58.110544 kubelet[3378]: I0129 11:58:58.107677 3378 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:58:58.110544 kubelet[3378]: I0129 11:58:58.107904 3378 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:58:58.114563 kubelet[3378]: I0129 11:58:58.114192 3378 server.go:1264] "Started kubelet" Jan 29 11:58:58.122065 kubelet[3378]: 
I0129 11:58:58.122002 3378 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:58:58.126403 kubelet[3378]: I0129 11:58:58.126377 3378 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:58:58.131857 kubelet[3378]: I0129 11:58:58.131790 3378 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:58:58.134976 kubelet[3378]: I0129 11:58:58.132228 3378 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:58:58.134976 kubelet[3378]: I0129 11:58:58.132496 3378 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:58:58.154841 kubelet[3378]: I0129 11:58:58.154811 3378 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:58:58.164634 kubelet[3378]: I0129 11:58:58.155168 3378 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:58:58.164634 kubelet[3378]: I0129 11:58:58.163605 3378 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:58:58.180173 kubelet[3378]: I0129 11:58:58.179538 3378 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:58:58.180173 kubelet[3378]: I0129 11:58:58.179721 3378 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:58:58.181870 sudo[3394]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 11:58:58.183532 sudo[3394]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 11:58:58.187351 kubelet[3378]: I0129 11:58:58.185957 3378 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:58:58.209386 kubelet[3378]: E0129 11:58:58.206995 3378 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:58:58.217413 kubelet[3378]: I0129 11:58:58.217374 3378 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:58:58.219831 kubelet[3378]: I0129 11:58:58.219706 3378 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:58:58.227217 kubelet[3378]: I0129 11:58:58.224363 3378 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:58:58.227217 kubelet[3378]: I0129 11:58:58.224412 3378 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:58:58.227217 kubelet[3378]: E0129 11:58:58.224464 3378 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:58:58.271098 kubelet[3378]: I0129 11:58:58.269080 3378 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-218" Jan 29 11:58:58.307069 kubelet[3378]: I0129 11:58:58.307030 3378 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-24-218" Jan 29 11:58:58.307207 kubelet[3378]: I0129 11:58:58.307114 3378 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-24-218" Jan 29 11:58:58.333267 kubelet[3378]: E0129 11:58:58.333231 3378 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:58:58.360519 kubelet[3378]: I0129 11:58:58.360489 3378 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:58:58.360519 kubelet[3378]: I0129 11:58:58.360510 3378 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:58:58.360519 kubelet[3378]: I0129 11:58:58.360530 3378 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:58:58.360745 kubelet[3378]: I0129 11:58:58.360693 3378 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:58:58.360745 kubelet[3378]: I0129 11:58:58.360705 3378 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:58:58.360745 kubelet[3378]: I0129 11:58:58.360731 3378 policy_none.go:49] "None policy: Start" Jan 29 11:58:58.364252 kubelet[3378]: I0129 11:58:58.363745 3378 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:58:58.364252 kubelet[3378]: I0129 11:58:58.363775 3378 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:58:58.365468 kubelet[3378]: I0129 11:58:58.365348 3378 state_mem.go:75] "Updated machine memory state" Jan 29 11:58:58.367142 kubelet[3378]: I0129 11:58:58.367124 3378 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:58:58.369826 kubelet[3378]: I0129 11:58:58.367501 3378 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:58:58.371627 kubelet[3378]: I0129 11:58:58.371610 3378 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:58:58.533945 kubelet[3378]: I0129 11:58:58.533835 3378 topology_manager.go:215] "Topology Admit Handler" podUID="69394ca13187f3d2908b902d50903483" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-24-218" Jan 29 11:58:58.539073 kubelet[3378]: I0129 11:58:58.538860 3378 topology_manager.go:215] "Topology Admit Handler" podUID="c5f481308650f5f5effd8cc4ef6c56c2" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-24-218" Jan 29 11:58:58.540580 kubelet[3378]: I0129 11:58:58.540550 3378 topology_manager.go:215] "Topology Admit Handler" podUID="57af7b39f84e5ebe65c31e8df4841cbb" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-24-218" Jan 29 11:58:58.566194 kubelet[3378]: I0129 11:58:58.566057 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/c5f481308650f5f5effd8cc4ef6c56c2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-218\" (UID: \"c5f481308650f5f5effd8cc4ef6c56c2\") " pod="kube-system/kube-controller-manager-ip-172-31-24-218" Jan 29 11:58:58.566450 kubelet[3378]: I0129 11:58:58.566425 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/57af7b39f84e5ebe65c31e8df4841cbb-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-218\" (UID: \"57af7b39f84e5ebe65c31e8df4841cbb\") " pod="kube-system/kube-scheduler-ip-172-31-24-218" Jan 29 11:58:58.566610 kubelet[3378]: I0129 11:58:58.566596 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/69394ca13187f3d2908b902d50903483-ca-certs\") pod \"kube-apiserver-ip-172-31-24-218\" (UID: \"69394ca13187f3d2908b902d50903483\") " pod="kube-system/kube-apiserver-ip-172-31-24-218" Jan 29 11:58:58.567094 kubelet[3378]: I0129 11:58:58.566837 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/69394ca13187f3d2908b902d50903483-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-218\" (UID: \"69394ca13187f3d2908b902d50903483\") " pod="kube-system/kube-apiserver-ip-172-31-24-218" Jan 29 11:58:58.567094 kubelet[3378]: I0129 11:58:58.566873 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/69394ca13187f3d2908b902d50903483-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-218\" (UID: \"69394ca13187f3d2908b902d50903483\") " pod="kube-system/kube-apiserver-ip-172-31-24-218" Jan 29 11:58:58.567094 kubelet[3378]: I0129 11:58:58.566908 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5f481308650f5f5effd8cc4ef6c56c2-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-218\" (UID: \"c5f481308650f5f5effd8cc4ef6c56c2\") " pod="kube-system/kube-controller-manager-ip-172-31-24-218" Jan 29 11:58:58.567094 kubelet[3378]: I0129 11:58:58.566939 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5f481308650f5f5effd8cc4ef6c56c2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-218\" (UID: \"c5f481308650f5f5effd8cc4ef6c56c2\") " pod="kube-system/kube-controller-manager-ip-172-31-24-218" Jan 29 11:58:58.567094 kubelet[3378]: I0129 11:58:58.566965 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c5f481308650f5f5effd8cc4ef6c56c2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-218\" (UID: \"c5f481308650f5f5effd8cc4ef6c56c2\") " pod="kube-system/kube-controller-manager-ip-172-31-24-218" Jan 29 11:58:58.567411 kubelet[3378]: I0129 11:58:58.566991 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5f481308650f5f5effd8cc4ef6c56c2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-218\" (UID: \"c5f481308650f5f5effd8cc4ef6c56c2\") " pod="kube-system/kube-controller-manager-ip-172-31-24-218" Jan 29 11:58:59.098169 
kubelet[3378]: I0129 11:58:59.095823 3378 apiserver.go:52] "Watching apiserver" Jan 29 11:58:59.157421 sudo[3394]: pam_unix(sudo:session): session closed for user root Jan 29 11:58:59.164602 kubelet[3378]: I0129 11:58:59.164530 3378 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:58:59.276391 kubelet[3378]: E0129 11:58:59.276352 3378 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-24-218\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-218" Jan 29 11:58:59.310330 kubelet[3378]: I0129 11:58:59.309191 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-218" podStartSLOduration=1.309158928 podStartE2EDuration="1.309158928s" podCreationTimestamp="2025-01-29 11:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:58:59.307981452 +0000 UTC m=+1.343983491" watchObservedRunningTime="2025-01-29 11:58:59.309158928 +0000 UTC m=+1.345160962" Jan 29 11:58:59.322563 kubelet[3378]: I0129 11:58:59.322366 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-218" podStartSLOduration=1.322346159 podStartE2EDuration="1.322346159s" podCreationTimestamp="2025-01-29 11:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:58:59.320627996 +0000 UTC m=+1.356630030" watchObservedRunningTime="2025-01-29 11:58:59.322346159 +0000 UTC m=+1.358348190" Jan 29 11:58:59.335569 kubelet[3378]: I0129 11:58:59.335501 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-218" podStartSLOduration=1.335468159 podStartE2EDuration="1.335468159s" podCreationTimestamp="2025-01-29 11:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:58:59.335222888 +0000 UTC m=+1.371224922" watchObservedRunningTime="2025-01-29 11:58:59.335468159 +0000 UTC m=+1.371470192" Jan 29 11:59:01.018418 sudo[2447]: pam_unix(sudo:session): session closed for user root Jan 29 11:59:01.046121 sshd[2443]: pam_unix(sshd:session): session closed for user core Jan 29 11:59:01.055983 systemd[1]: sshd@6-172.31.24.218:22-139.178.68.195:58070.service: Deactivated successfully. Jan 29 11:59:01.056898 systemd-logind[2069]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:59:01.074678 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:59:01.077470 systemd-logind[2069]: Removed session 7. Jan 29 11:59:10.061074 update_engine[2074]: I20250129 11:59:10.058355 2074 update_attempter.cc:509] Updating boot flags... Jan 29 11:59:10.172485 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3461) Jan 29 11:59:11.591492 kubelet[3378]: I0129 11:59:11.591456 3378 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:59:11.596476 containerd[2099]: time="2025-01-29T11:59:11.596432001Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
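With the node registered, the kubelet pushes its allocated pod CIDR down to the runtime ("Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"), and containerd notes that no CNI config template exists yet and that another component is expected to drop one in -- presumably the cilium pods admitted just below. A small Go sketch of what that per-node allocation amounts to:

package main

import (
	"fmt"
	"net"
)

func main() {
	// CIDR copied from the kuberuntime_manager.go entry above.
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	// A /24 leaves 2^(32-24) = 256 addresses for this node's pods.
	fmt.Printf("pod range %s: %d addresses\n", ipnet, 1<<(bits-ones))
}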
Jan 29 11:59:11.599645 kubelet[3378]: I0129 11:59:11.599621 3378 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:59:12.444934 kubelet[3378]: I0129 11:59:12.444872 3378 topology_manager.go:215] "Topology Admit Handler" podUID="3c1d7a73-2b23-4ead-89d2-421caad46581" podNamespace="kube-system" podName="kube-proxy-vvnlb" Jan 29 11:59:12.459914 kubelet[3378]: I0129 11:59:12.459867 3378 topology_manager.go:215] "Topology Admit Handler" podUID="76a569f5-e2e9-49c0-a8e1-9d80860a0223" podNamespace="kube-system" podName="cilium-p66n5" Jan 29 11:59:12.471807 kubelet[3378]: I0129 11:59:12.470866 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58rqx\" (UniqueName: \"kubernetes.io/projected/3c1d7a73-2b23-4ead-89d2-421caad46581-kube-api-access-58rqx\") pod \"kube-proxy-vvnlb\" (UID: \"3c1d7a73-2b23-4ead-89d2-421caad46581\") " pod="kube-system/kube-proxy-vvnlb" Jan 29 11:59:12.471807 kubelet[3378]: I0129 11:59:12.470911 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c1d7a73-2b23-4ead-89d2-421caad46581-xtables-lock\") pod \"kube-proxy-vvnlb\" (UID: \"3c1d7a73-2b23-4ead-89d2-421caad46581\") " pod="kube-system/kube-proxy-vvnlb" Jan 29 11:59:12.471807 kubelet[3378]: I0129 11:59:12.470944 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c1d7a73-2b23-4ead-89d2-421caad46581-kube-proxy\") pod \"kube-proxy-vvnlb\" (UID: \"3c1d7a73-2b23-4ead-89d2-421caad46581\") " pod="kube-system/kube-proxy-vvnlb" Jan 29 11:59:12.471807 kubelet[3378]: I0129 11:59:12.470967 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c1d7a73-2b23-4ead-89d2-421caad46581-lib-modules\") pod \"kube-proxy-vvnlb\" (UID: \"3c1d7a73-2b23-4ead-89d2-421caad46581\") " pod="kube-system/kube-proxy-vvnlb" Jan 29 11:59:12.515990 kubelet[3378]: I0129 11:59:12.515548 3378 topology_manager.go:215] "Topology Admit Handler" podUID="cd0b4b26-c048-44be-a077-43b9f9d423ed" podNamespace="kube-system" podName="cilium-operator-599987898-z5ldf" Jan 29 11:59:12.571921 kubelet[3378]: I0129 11:59:12.571884 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-hostproc\") pod \"cilium-p66n5\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") " pod="kube-system/cilium-p66n5" Jan 29 11:59:12.572582 kubelet[3378]: I0129 11:59:12.572564 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/76a569f5-e2e9-49c0-a8e1-9d80860a0223-hubble-tls\") pod \"cilium-p66n5\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") " pod="kube-system/cilium-p66n5" Jan 29 11:59:12.572693 kubelet[3378]: I0129 11:59:12.572683 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-cni-path\") pod \"cilium-p66n5\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") " pod="kube-system/cilium-p66n5" Jan 29 11:59:12.572763 kubelet[3378]: I0129 11:59:12.572752 3378 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-lib-modules\") pod \"cilium-p66n5\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") " pod="kube-system/cilium-p66n5" Jan 29 11:59:12.572837 kubelet[3378]: I0129 11:59:12.572830 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-bpf-maps\") pod \"cilium-p66n5\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") " pod="kube-system/cilium-p66n5" Jan 29 11:59:12.572927 kubelet[3378]: I0129 11:59:12.572900 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/76a569f5-e2e9-49c0-a8e1-9d80860a0223-clustermesh-secrets\") pod \"cilium-p66n5\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") " pod="kube-system/cilium-p66n5" Jan 29 11:59:12.573329 kubelet[3378]: I0129 11:59:12.572987 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4dvj\" (UniqueName: \"kubernetes.io/projected/76a569f5-e2e9-49c0-a8e1-9d80860a0223-kube-api-access-q4dvj\") pod \"cilium-p66n5\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") " pod="kube-system/cilium-p66n5" Jan 29 11:59:12.573329 kubelet[3378]: I0129 11:59:12.573108 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd0b4b26-c048-44be-a077-43b9f9d423ed-cilium-config-path\") pod \"cilium-operator-599987898-z5ldf\" (UID: \"cd0b4b26-c048-44be-a077-43b9f9d423ed\") " pod="kube-system/cilium-operator-599987898-z5ldf" Jan 29 11:59:12.573329 kubelet[3378]: I0129 11:59:12.573134 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-host-proc-sys-net\") pod \"cilium-p66n5\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") " pod="kube-system/cilium-p66n5" Jan 29 11:59:12.573329 kubelet[3378]: I0129 11:59:12.573161 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-host-proc-sys-kernel\") pod \"cilium-p66n5\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") " pod="kube-system/cilium-p66n5" Jan 29 11:59:12.573329 kubelet[3378]: I0129 11:59:12.573177 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76a569f5-e2e9-49c0-a8e1-9d80860a0223-cilium-config-path\") pod \"cilium-p66n5\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") " pod="kube-system/cilium-p66n5" Jan 29 11:59:12.573575 kubelet[3378]: I0129 11:59:12.573192 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmmdc\" (UniqueName: \"kubernetes.io/projected/cd0b4b26-c048-44be-a077-43b9f9d423ed-kube-api-access-fmmdc\") pod \"cilium-operator-599987898-z5ldf\" (UID: \"cd0b4b26-c048-44be-a077-43b9f9d423ed\") " pod="kube-system/cilium-operator-599987898-z5ldf" Jan 29 11:59:12.573575 kubelet[3378]: I0129 11:59:12.573206 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-etc-cni-netd\") pod \"cilium-p66n5\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") " pod="kube-system/cilium-p66n5" Jan 29 11:59:12.573575 kubelet[3378]: I0129 11:59:12.573220 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-xtables-lock\") pod \"cilium-p66n5\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") " pod="kube-system/cilium-p66n5" Jan 29 11:59:12.573575 kubelet[3378]: I0129 11:59:12.573245 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-cilium-run\") pod \"cilium-p66n5\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") " pod="kube-system/cilium-p66n5" Jan 29 11:59:12.573575 kubelet[3378]: I0129 11:59:12.573263 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-cilium-cgroup\") pod \"cilium-p66n5\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") " pod="kube-system/cilium-p66n5" Jan 29 11:59:12.751452 containerd[2099]: time="2025-01-29T11:59:12.751406562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vvnlb,Uid:3c1d7a73-2b23-4ead-89d2-421caad46581,Namespace:kube-system,Attempt:0,}" Jan 29 11:59:12.764251 containerd[2099]: time="2025-01-29T11:59:12.764194419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p66n5,Uid:76a569f5-e2e9-49c0-a8e1-9d80860a0223,Namespace:kube-system,Attempt:0,}" Jan 29 11:59:12.811380 containerd[2099]: time="2025-01-29T11:59:12.811235722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:59:12.811380 containerd[2099]: time="2025-01-29T11:59:12.811285906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:59:12.811380 containerd[2099]: time="2025-01-29T11:59:12.811302036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:12.813495 containerd[2099]: time="2025-01-29T11:59:12.812658127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:12.830078 containerd[2099]: time="2025-01-29T11:59:12.829719347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:59:12.830078 containerd[2099]: time="2025-01-29T11:59:12.829822410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:59:12.830078 containerd[2099]: time="2025-01-29T11:59:12.829843687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:12.830571 containerd[2099]: time="2025-01-29T11:59:12.830056520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:12.832164 containerd[2099]: time="2025-01-29T11:59:12.832118848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-z5ldf,Uid:cd0b4b26-c048-44be-a077-43b9f9d423ed,Namespace:kube-system,Attempt:0,}" Jan 29 11:59:12.893077 containerd[2099]: time="2025-01-29T11:59:12.893022240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vvnlb,Uid:3c1d7a73-2b23-4ead-89d2-421caad46581,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee6232462710750a09518cca49587c426b3bb0cc4aea85146234b20fc1270263\"" Jan 29 11:59:12.896821 containerd[2099]: time="2025-01-29T11:59:12.896715960Z" level=info msg="CreateContainer within sandbox \"ee6232462710750a09518cca49587c426b3bb0cc4aea85146234b20fc1270263\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:59:12.917486 containerd[2099]: time="2025-01-29T11:59:12.917442484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p66n5,Uid:76a569f5-e2e9-49c0-a8e1-9d80860a0223,Namespace:kube-system,Attempt:0,} returns sandbox id \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\"" Jan 29 11:59:12.926521 containerd[2099]: time="2025-01-29T11:59:12.926419499Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 11:59:12.975422 containerd[2099]: time="2025-01-29T11:59:12.975098724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:59:12.975422 containerd[2099]: time="2025-01-29T11:59:12.975155394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:59:12.975422 containerd[2099]: time="2025-01-29T11:59:12.975189870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:12.975422 containerd[2099]: time="2025-01-29T11:59:12.975299855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:13.009548 containerd[2099]: time="2025-01-29T11:59:13.009439680Z" level=info msg="CreateContainer within sandbox \"ee6232462710750a09518cca49587c426b3bb0cc4aea85146234b20fc1270263\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b1a7ad4338ccad7c9284d4b954f981fb0aa356f4ed0c077f9fbfc0b528034b41\"" Jan 29 11:59:13.018360 containerd[2099]: time="2025-01-29T11:59:13.014731591Z" level=info msg="StartContainer for \"b1a7ad4338ccad7c9284d4b954f981fb0aa356f4ed0c077f9fbfc0b528034b41\"" Jan 29 11:59:13.065594 containerd[2099]: time="2025-01-29T11:59:13.065552125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-z5ldf,Uid:cd0b4b26-c048-44be-a077-43b9f9d423ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"339f17df78eff15909c00d4eeae5d9d5ff19a0bbfd3490c04fa4957b605dbdea\"" Jan 29 11:59:13.107510 containerd[2099]: time="2025-01-29T11:59:13.107476750Z" level=info msg="StartContainer for \"b1a7ad4338ccad7c9284d4b954f981fb0aa356f4ed0c077f9fbfc0b528034b41\" returns successfully" Jan 29 11:59:18.057848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount530126647.mount: Deactivated successfully. 
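The kube-proxy-vvnlb entries above trace the usual CRI call order: RunPodSandbox returns a sandbox ID, CreateContainer builds the container inside that sandbox, and StartContainer runs it. A sketch of that flow against a stand-in interface (hypothetical names and a fake implementation; the real API is the CRI RuntimeService over gRPC):

package main

import "fmt"

// runtimeService is a stand-in for the CRI calls visible in the log.
type runtimeService interface {
	RunPodSandbox(pod string) (string, error)
	CreateContainer(sandboxID, name string) (string, error)
	StartContainer(containerID string) error
}

type fake struct{}

func (fake) RunPodSandbox(string) (string, error)           { return "ee6232462710...", nil }
func (fake) CreateContainer(string, string) (string, error) { return "b1a7ad4338cc...", nil }
func (fake) StartContainer(string) error                    { return nil }

// startPod drives the three calls in the order the journal shows them.
func startPod(r runtimeService, pod, ctr string) error {
	sb, err := r.RunPodSandbox(pod)
	if err != nil {
		return fmt.Errorf("RunPodSandbox: %w", err)
	}
	id, err := r.CreateContainer(sb, ctr)
	if err != nil {
		return fmt.Errorf("CreateContainer: %w", err)
	}
	fmt.Printf("sandbox %s, container %s\n", sb, id)
	return r.StartContainer(id)
}

func main() {
	if err := startPod(fake{}, "kube-proxy-vvnlb", "kube-proxy"); err != nil {
		fmt.Println(err)
	}
}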
Jan 29 11:59:18.291244 kubelet[3378]: I0129 11:59:18.286206 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vvnlb" podStartSLOduration=6.286185604 podStartE2EDuration="6.286185604s" podCreationTimestamp="2025-01-29 11:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:59:13.326464852 +0000 UTC m=+15.362466885" watchObservedRunningTime="2025-01-29 11:59:18.286185604 +0000 UTC m=+20.322187637" Jan 29 11:59:20.719074 containerd[2099]: time="2025-01-29T11:59:20.719024382Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:20.722008 containerd[2099]: time="2025-01-29T11:59:20.721468650Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 11:59:20.722008 containerd[2099]: time="2025-01-29T11:59:20.721577609Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:20.723400 containerd[2099]: time="2025-01-29T11:59:20.723366795Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.796893094s" Jan 29 11:59:20.723985 containerd[2099]: time="2025-01-29T11:59:20.723515612Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 11:59:20.725125 containerd[2099]: time="2025-01-29T11:59:20.725096311Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 11:59:20.728327 containerd[2099]: time="2025-01-29T11:59:20.728225595Z" level=info msg="CreateContainer within sandbox \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:59:20.824526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount777341295.mount: Deactivated successfully. Jan 29 11:59:20.852760 containerd[2099]: time="2025-01-29T11:59:20.852691537Z" level=info msg="CreateContainer within sandbox \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee\"" Jan 29 11:59:20.855498 containerd[2099]: time="2025-01-29T11:59:20.854872749Z" level=info msg="StartContainer for \"42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee\"" Jan 29 11:59:20.956889 systemd[1]: run-containerd-runc-k8s.io-42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee-runc.KTOKBP.mount: Deactivated successfully. 
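The cilium image pull above reports 166,730,503 bytes read in 7.796893094s, which works out to roughly 21 MB/s from quay.io; a quick check of the arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures from the "stop pulling image" / "Pulled image" entries above.
	const bytesRead = 166730503
	d, _ := time.ParseDuration("7.796893094s")
	bps := float64(bytesRead) / d.Seconds()
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", bps/1e6, bps/(1<<20))
}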
Jan 29 11:59:20.996787 containerd[2099]: time="2025-01-29T11:59:20.994243432Z" level=info msg="StartContainer for \"42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee\" returns successfully" Jan 29 11:59:21.101737 containerd[2099]: time="2025-01-29T11:59:21.087803256Z" level=info msg="shim disconnected" id=42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee namespace=k8s.io Jan 29 11:59:21.101986 containerd[2099]: time="2025-01-29T11:59:21.101738149Z" level=warning msg="cleaning up after shim disconnected" id=42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee namespace=k8s.io Jan 29 11:59:21.101986 containerd[2099]: time="2025-01-29T11:59:21.101758922Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:59:21.371520 containerd[2099]: time="2025-01-29T11:59:21.369826170Z" level=info msg="CreateContainer within sandbox \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:59:21.425125 containerd[2099]: time="2025-01-29T11:59:21.425083112Z" level=info msg="CreateContainer within sandbox \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bdfea5c23aff25155352fa4fb555a6cc3f336a602e1960baa896f912c1cde374\"" Jan 29 11:59:21.427781 containerd[2099]: time="2025-01-29T11:59:21.425998323Z" level=info msg="StartContainer for \"bdfea5c23aff25155352fa4fb555a6cc3f336a602e1960baa896f912c1cde374\"" Jan 29 11:59:21.497650 containerd[2099]: time="2025-01-29T11:59:21.497583276Z" level=info msg="StartContainer for \"bdfea5c23aff25155352fa4fb555a6cc3f336a602e1960baa896f912c1cde374\" returns successfully" Jan 29 11:59:21.512077 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:59:21.513691 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:59:21.514006 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:59:21.535047 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:59:21.572962 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:59:21.583234 containerd[2099]: time="2025-01-29T11:59:21.583166487Z" level=info msg="shim disconnected" id=bdfea5c23aff25155352fa4fb555a6cc3f336a602e1960baa896f912c1cde374 namespace=k8s.io Jan 29 11:59:21.583234 containerd[2099]: time="2025-01-29T11:59:21.583229177Z" level=warning msg="cleaning up after shim disconnected" id=bdfea5c23aff25155352fa4fb555a6cc3f336a602e1960baa896f912c1cde374 namespace=k8s.io Jan 29 11:59:21.583672 containerd[2099]: time="2025-01-29T11:59:21.583244822Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:59:21.818703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee-rootfs.mount: Deactivated successfully. Jan 29 11:59:22.087520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2017755092.mount: Deactivated successfully. Jan 29 11:59:22.387067 containerd[2099]: time="2025-01-29T11:59:22.386502778Z" level=info msg="CreateContainer within sandbox \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:59:22.420433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3280854098.mount: Deactivated successfully. 
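mount-cgroup is the first of Cilium's init containers, so the "shim disconnected" / "cleaning up dead shim" lines above mark a normal run-to-completion exit rather than a crash. A sketch that pairs the two event types by container ID while scanning journal output (assumes one entry per line on stdin, e.g. from journalctl -u containerd; the optional backslash accommodates logrus quote escaping):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	started      = regexp.MustCompile(`StartContainer for \\?"([0-9a-f]{64})\\?" returns successfully`)
	disconnected = regexp.MustCompile(`shim disconnected" id=([0-9a-f]{64})`)
)

func main() {
	seen := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if m := started.FindStringSubmatch(line); m != nil {
			seen[m[1]] = true
		}
		if m := disconnected.FindStringSubmatch(line); m != nil {
			// A shim exit for a container we saw start is a normal
			// run-to-completion, like the init containers here.
			fmt.Printf("%s exited (started earlier: %v)\n", m[1][:12], seen[m[1]])
		}
	}
}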
Jan 29 11:59:22.430831 containerd[2099]: time="2025-01-29T11:59:22.430789102Z" level=info msg="CreateContainer within sandbox \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"710cc0f425b29cf734c309a17796304f16c103f5fa6f9502fa62f90721e9285e\"" Jan 29 11:59:22.432874 containerd[2099]: time="2025-01-29T11:59:22.432844151Z" level=info msg="StartContainer for \"710cc0f425b29cf734c309a17796304f16c103f5fa6f9502fa62f90721e9285e\"" Jan 29 11:59:22.564183 containerd[2099]: time="2025-01-29T11:59:22.564140603Z" level=info msg="StartContainer for \"710cc0f425b29cf734c309a17796304f16c103f5fa6f9502fa62f90721e9285e\" returns successfully" Jan 29 11:59:22.828988 containerd[2099]: time="2025-01-29T11:59:22.828666423Z" level=info msg="shim disconnected" id=710cc0f425b29cf734c309a17796304f16c103f5fa6f9502fa62f90721e9285e namespace=k8s.io Jan 29 11:59:22.829470 containerd[2099]: time="2025-01-29T11:59:22.828728352Z" level=warning msg="cleaning up after shim disconnected" id=710cc0f425b29cf734c309a17796304f16c103f5fa6f9502fa62f90721e9285e namespace=k8s.io Jan 29 11:59:22.829470 containerd[2099]: time="2025-01-29T11:59:22.829266420Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:59:22.853339 containerd[2099]: time="2025-01-29T11:59:22.853199441Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:59:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:59:22.939464 containerd[2099]: time="2025-01-29T11:59:22.939416827Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:22.940747 containerd[2099]: time="2025-01-29T11:59:22.940591507Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 11:59:22.942014 containerd[2099]: time="2025-01-29T11:59:22.941740732Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:59:22.943222 containerd[2099]: time="2025-01-29T11:59:22.943191184Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.218053916s" Jan 29 11:59:22.943373 containerd[2099]: time="2025-01-29T11:59:22.943350607Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 11:59:22.946425 containerd[2099]: time="2025-01-29T11:59:22.946394600Z" level=info msg="CreateContainer within sandbox \"339f17df78eff15909c00d4eeae5d9d5ff19a0bbfd3490c04fa4957b605dbdea\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 11:59:22.970385 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount962935399.mount: Deactivated successfully. Jan 29 11:59:22.972748 containerd[2099]: time="2025-01-29T11:59:22.972706268Z" level=info msg="CreateContainer within sandbox \"339f17df78eff15909c00d4eeae5d9d5ff19a0bbfd3490c04fa4957b605dbdea\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1\"" Jan 29 11:59:22.973575 containerd[2099]: time="2025-01-29T11:59:22.973544985Z" level=info msg="StartContainer for \"96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1\"" Jan 29 11:59:23.047334 containerd[2099]: time="2025-01-29T11:59:23.047244052Z" level=info msg="StartContainer for \"96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1\" returns successfully" Jan 29 11:59:23.398000 containerd[2099]: time="2025-01-29T11:59:23.397958210Z" level=info msg="CreateContainer within sandbox \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:59:23.417962 kubelet[3378]: I0129 11:59:23.417735 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-z5ldf" podStartSLOduration=1.542245037 podStartE2EDuration="11.417710633s" podCreationTimestamp="2025-01-29 11:59:12 +0000 UTC" firstStartedPulling="2025-01-29 11:59:13.068810019 +0000 UTC m=+15.104812033" lastFinishedPulling="2025-01-29 11:59:22.944275605 +0000 UTC m=+24.980277629" observedRunningTime="2025-01-29 11:59:23.416175084 +0000 UTC m=+25.452177117" watchObservedRunningTime="2025-01-29 11:59:23.417710633 +0000 UTC m=+25.453712667" Jan 29 11:59:23.418765 containerd[2099]: time="2025-01-29T11:59:23.418214358Z" level=info msg="CreateContainer within sandbox \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f0eb10f67b550c85f3758e10c736aac764c149c3a8220a8292967d687ae71e69\"" Jan 29 11:59:23.420497 containerd[2099]: time="2025-01-29T11:59:23.420135235Z" level=info msg="StartContainer for \"f0eb10f67b550c85f3758e10c736aac764c149c3a8220a8292967d687ae71e69\"" Jan 29 11:59:23.543356 containerd[2099]: time="2025-01-29T11:59:23.541890873Z" level=info msg="StartContainer for \"f0eb10f67b550c85f3758e10c736aac764c149c3a8220a8292967d687ae71e69\" returns successfully" Jan 29 11:59:23.609371 containerd[2099]: time="2025-01-29T11:59:23.609248599Z" level=info msg="shim disconnected" id=f0eb10f67b550c85f3758e10c736aac764c149c3a8220a8292967d687ae71e69 namespace=k8s.io Jan 29 11:59:23.609371 containerd[2099]: time="2025-01-29T11:59:23.609331905Z" level=warning msg="cleaning up after shim disconnected" id=f0eb10f67b550c85f3758e10c736aac764c149c3a8220a8292967d687ae71e69 namespace=k8s.io Jan 29 11:59:23.609371 containerd[2099]: time="2025-01-29T11:59:23.609344307Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:59:24.403147 containerd[2099]: time="2025-01-29T11:59:24.403107107Z" level=info msg="CreateContainer within sandbox \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:59:24.437181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2728077530.mount: Deactivated successfully. 
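The cilium-operator startup-latency entry above decomposes as: end-to-end duration = observedRunningTime − podCreationTimestamp ≈ 11.42s, while the SLO duration subtracts the image-pull window (lastFinishedPulling − firstStartedPulling ≈ 9.88s), leaving ≈ 1.54s. Reproducing the arithmetic from the logged timestamps, matching the entry to within its own display rounding:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-01-29 11:59:12 +0000 UTC")
	pullStart := parse("2025-01-29 11:59:13.068810019 +0000 UTC")
	pullEnd := parse("2025-01-29 11:59:22.944275605 +0000 UTC")
	running := parse("2025-01-29 11:59:23.417710633 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println("E2E:", e2e) // 11.417710633s
	fmt.Println("SLO:", slo) // ~1.542245s, the logged 1.542245037 after rounding
}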
Jan 29 11:59:24.438836 containerd[2099]: time="2025-01-29T11:59:24.438793240Z" level=info msg="CreateContainer within sandbox \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916\"" Jan 29 11:59:24.440561 containerd[2099]: time="2025-01-29T11:59:24.440519334Z" level=info msg="StartContainer for \"3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916\"" Jan 29 11:59:24.531425 containerd[2099]: time="2025-01-29T11:59:24.531375910Z" level=info msg="StartContainer for \"3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916\" returns successfully" Jan 29 11:59:24.976811 kubelet[3378]: I0129 11:59:24.976272 3378 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 11:59:25.022920 kubelet[3378]: I0129 11:59:25.022848 3378 topology_manager.go:215] "Topology Admit Handler" podUID="ccdc9d4e-3dec-4c2c-8805-db5fbcb28da8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2p4ch" Jan 29 11:59:25.027094 kubelet[3378]: I0129 11:59:25.027063 3378 topology_manager.go:215] "Topology Admit Handler" podUID="53225c76-b306-4b81-b3d3-537c25170df9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xcz8x" Jan 29 11:59:25.074422 kubelet[3378]: I0129 11:59:25.074128 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53225c76-b306-4b81-b3d3-537c25170df9-config-volume\") pod \"coredns-7db6d8ff4d-xcz8x\" (UID: \"53225c76-b306-4b81-b3d3-537c25170df9\") " pod="kube-system/coredns-7db6d8ff4d-xcz8x" Jan 29 11:59:25.075231 kubelet[3378]: I0129 11:59:25.075122 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccdc9d4e-3dec-4c2c-8805-db5fbcb28da8-config-volume\") pod \"coredns-7db6d8ff4d-2p4ch\" (UID: \"ccdc9d4e-3dec-4c2c-8805-db5fbcb28da8\") " pod="kube-system/coredns-7db6d8ff4d-2p4ch" Jan 29 11:59:25.075776 kubelet[3378]: I0129 11:59:25.075741 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lfrc\" (UniqueName: \"kubernetes.io/projected/ccdc9d4e-3dec-4c2c-8805-db5fbcb28da8-kube-api-access-4lfrc\") pod \"coredns-7db6d8ff4d-2p4ch\" (UID: \"ccdc9d4e-3dec-4c2c-8805-db5fbcb28da8\") " pod="kube-system/coredns-7db6d8ff4d-2p4ch" Jan 29 11:59:25.075887 kubelet[3378]: I0129 11:59:25.075837 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7lrx\" (UniqueName: \"kubernetes.io/projected/53225c76-b306-4b81-b3d3-537c25170df9-kube-api-access-s7lrx\") pod \"coredns-7db6d8ff4d-xcz8x\" (UID: \"53225c76-b306-4b81-b3d3-537c25170df9\") " pod="kube-system/coredns-7db6d8ff4d-xcz8x" Jan 29 11:59:25.335109 containerd[2099]: time="2025-01-29T11:59:25.334930911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2p4ch,Uid:ccdc9d4e-3dec-4c2c-8805-db5fbcb28da8,Namespace:kube-system,Attempt:0,}" Jan 29 11:59:25.356412 containerd[2099]: time="2025-01-29T11:59:25.352641595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xcz8x,Uid:53225c76-b306-4b81-b3d3-537c25170df9,Namespace:kube-system,Attempt:0,}" Jan 29 11:59:25.512386 kubelet[3378]: I0129 11:59:25.511617 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/cilium-p66n5" podStartSLOduration=5.70562529 podStartE2EDuration="13.511589097s" podCreationTimestamp="2025-01-29 11:59:12 +0000 UTC" firstStartedPulling="2025-01-29 11:59:12.918922251 +0000 UTC m=+14.954924264" lastFinishedPulling="2025-01-29 11:59:20.72488604 +0000 UTC m=+22.760888071" observedRunningTime="2025-01-29 11:59:25.505204275 +0000 UTC m=+27.541206310" watchObservedRunningTime="2025-01-29 11:59:25.511589097 +0000 UTC m=+27.547591130" Jan 29 11:59:27.301124 systemd-networkd[1652]: cilium_host: Link UP Jan 29 11:59:27.305886 systemd-networkd[1652]: cilium_net: Link UP Jan 29 11:59:27.306195 systemd-networkd[1652]: cilium_net: Gained carrier Jan 29 11:59:27.306361 (udev-worker)[4242]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:59:27.306493 systemd-networkd[1652]: cilium_host: Gained carrier Jan 29 11:59:27.308813 (udev-worker)[4244]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:59:27.509127 systemd-networkd[1652]: cilium_vxlan: Link UP Jan 29 11:59:27.509139 systemd-networkd[1652]: cilium_vxlan: Gained carrier Jan 29 11:59:27.749914 systemd-networkd[1652]: cilium_net: Gained IPv6LL Jan 29 11:59:28.152572 kernel: NET: Registered PF_ALG protocol family Jan 29 11:59:28.213647 systemd-networkd[1652]: cilium_host: Gained IPv6LL Jan 29 11:59:29.045362 systemd-networkd[1652]: lxc_health: Link UP Jan 29 11:59:29.050007 (udev-worker)[4305]: Network interface NamePolicy= disabled on kernel command line. Jan 29 11:59:29.061530 systemd-networkd[1652]: lxc_health: Gained carrier Jan 29 11:59:29.237630 systemd-networkd[1652]: cilium_vxlan: Gained IPv6LL Jan 29 11:59:29.513787 systemd-networkd[1652]: lxcf74bea10f0cb: Link UP Jan 29 11:59:29.520253 systemd-networkd[1652]: lxc3cb70bcb970b: Link UP Jan 29 11:59:29.529330 kernel: eth0: renamed from tmp81c56 Jan 29 11:59:29.539671 systemd-networkd[1652]: lxcf74bea10f0cb: Gained carrier Jan 29 11:59:29.544067 kernel: eth0: renamed from tmp7b142 Jan 29 11:59:29.545283 systemd-networkd[1652]: lxc3cb70bcb970b: Gained carrier Jan 29 11:59:30.584551 systemd-networkd[1652]: lxcf74bea10f0cb: Gained IPv6LL Jan 29 11:59:30.584906 systemd-networkd[1652]: lxc3cb70bcb970b: Gained IPv6LL Jan 29 11:59:31.029492 systemd-networkd[1652]: lxc_health: Gained IPv6LL Jan 29 11:59:33.630546 ntpd[2047]: Listen normally on 6 cilium_host 192.168.0.253:123 Jan 29 11:59:33.631594 ntpd[2047]: 29 Jan 11:59:33 ntpd[2047]: Listen normally on 6 cilium_host 192.168.0.253:123 Jan 29 11:59:33.631594 ntpd[2047]: 29 Jan 11:59:33 ntpd[2047]: Listen normally on 7 cilium_net [fe80::504b:5dff:fedc:c88c%4]:123 Jan 29 11:59:33.631594 ntpd[2047]: 29 Jan 11:59:33 ntpd[2047]: Listen normally on 8 cilium_host [fe80::8cc5:adff:fe7d:8173%5]:123 Jan 29 11:59:33.631594 ntpd[2047]: 29 Jan 11:59:33 ntpd[2047]: Listen normally on 9 cilium_vxlan [fe80::3811:83ff:fe54:e9f1%6]:123 Jan 29 11:59:33.631594 ntpd[2047]: 29 Jan 11:59:33 ntpd[2047]: Listen normally on 10 lxc_health [fe80::fc18:8dff:fe89:88c4%8]:123 Jan 29 11:59:33.631594 ntpd[2047]: 29 Jan 11:59:33 ntpd[2047]: Listen normally on 11 lxc3cb70bcb970b [fe80::24ea:fbff:fe92:52d0%10]:123 Jan 29 11:59:33.631594 ntpd[2047]: 29 Jan 11:59:33 ntpd[2047]: Listen normally on 12 lxcf74bea10f0cb [fe80::2c1c:6fff:fee2:41d9%12]:123 Jan 29 11:59:33.630648 ntpd[2047]: Listen normally on 7 cilium_net [fe80::504b:5dff:fedc:c88c%4]:123 Jan 29 11:59:33.630703 ntpd[2047]: Listen normally on 8 cilium_host [fe80::8cc5:adff:fe7d:8173%5]:123 Jan 29 11:59:33.630744 ntpd[2047]: Listen normally on 9 
cilium_vxlan [fe80::3811:83ff:fe54:e9f1%6]:123 Jan 29 11:59:33.630790 ntpd[2047]: Listen normally on 10 lxc_health [fe80::fc18:8dff:fe89:88c4%8]:123 Jan 29 11:59:33.630831 ntpd[2047]: Listen normally on 11 lxc3cb70bcb970b [fe80::24ea:fbff:fe92:52d0%10]:123 Jan 29 11:59:33.630868 ntpd[2047]: Listen normally on 12 lxcf74bea10f0cb [fe80::2c1c:6fff:fee2:41d9%12]:123 Jan 29 11:59:34.620080 containerd[2099]: time="2025-01-29T11:59:34.619925759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:59:34.620971 containerd[2099]: time="2025-01-29T11:59:34.620243147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:59:34.621283 containerd[2099]: time="2025-01-29T11:59:34.620662734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:34.621465 containerd[2099]: time="2025-01-29T11:59:34.621263083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:34.676765 containerd[2099]: time="2025-01-29T11:59:34.676094721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:59:34.676765 containerd[2099]: time="2025-01-29T11:59:34.676188257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:59:34.676765 containerd[2099]: time="2025-01-29T11:59:34.676207291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:34.677003 containerd[2099]: time="2025-01-29T11:59:34.676947618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:59:34.785051 containerd[2099]: time="2025-01-29T11:59:34.784996801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xcz8x,Uid:53225c76-b306-4b81-b3d3-537c25170df9,Namespace:kube-system,Attempt:0,} returns sandbox id \"81c56a80f55d2d49bc421ee69b6fee130647439b7cab665b0fd07a8a21ed65b0\"" Jan 29 11:59:34.791495 containerd[2099]: time="2025-01-29T11:59:34.791400641Z" level=info msg="CreateContainer within sandbox \"81c56a80f55d2d49bc421ee69b6fee130647439b7cab665b0fd07a8a21ed65b0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:59:34.837610 containerd[2099]: time="2025-01-29T11:59:34.837542979Z" level=info msg="CreateContainer within sandbox \"81c56a80f55d2d49bc421ee69b6fee130647439b7cab665b0fd07a8a21ed65b0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f93de895799028bce530d92b68bd3d669bc8b331c70126c252e45eec4b05d5fb\"" Jan 29 11:59:34.840400 containerd[2099]: time="2025-01-29T11:59:34.838541622Z" level=info msg="StartContainer for \"f93de895799028bce530d92b68bd3d669bc8b331c70126c252e45eec4b05d5fb\"" Jan 29 11:59:34.849088 containerd[2099]: time="2025-01-29T11:59:34.849052489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2p4ch,Uid:ccdc9d4e-3dec-4c2c-8805-db5fbcb28da8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b14241abd2932b3fdd2db7cf2b144e7f273b402d8da7ac9bdb48e319f33b311\"" Jan 29 11:59:34.854044 containerd[2099]: time="2025-01-29T11:59:34.854008558Z" level=info msg="CreateContainer within sandbox \"7b14241abd2932b3fdd2db7cf2b144e7f273b402d8da7ac9bdb48e319f33b311\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:59:34.871378 containerd[2099]: time="2025-01-29T11:59:34.870791663Z" level=info msg="CreateContainer within sandbox \"7b14241abd2932b3fdd2db7cf2b144e7f273b402d8da7ac9bdb48e319f33b311\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6334b95efd94524ec7b23ec5840ebd5df5754a2a94f61e8a3488570bb8cd824e\"" Jan 29 11:59:34.872117 containerd[2099]: time="2025-01-29T11:59:34.872089515Z" level=info msg="StartContainer for \"6334b95efd94524ec7b23ec5840ebd5df5754a2a94f61e8a3488570bb8cd824e\"" Jan 29 11:59:34.889154 kubelet[3378]: I0129 11:59:34.888167 3378 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:59:34.961207 containerd[2099]: time="2025-01-29T11:59:34.961161456Z" level=info msg="StartContainer for \"f93de895799028bce530d92b68bd3d669bc8b331c70126c252e45eec4b05d5fb\" returns successfully" Jan 29 11:59:34.985297 containerd[2099]: time="2025-01-29T11:59:34.985254898Z" level=info msg="StartContainer for \"6334b95efd94524ec7b23ec5840ebd5df5754a2a94f61e8a3488570bb8cd824e\" returns successfully" Jan 29 11:59:35.601770 kubelet[3378]: I0129 11:59:35.601703 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xcz8x" podStartSLOduration=23.601678814 podStartE2EDuration="23.601678814s" podCreationTimestamp="2025-01-29 11:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:59:35.570011984 +0000 UTC m=+37.606014028" watchObservedRunningTime="2025-01-29 11:59:35.601678814 +0000 UTC m=+37.637680844" Jan 29 11:59:35.640803 kubelet[3378]: I0129 11:59:35.640711 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2p4ch" 
podStartSLOduration=23.640672963 podStartE2EDuration="23.640672963s" podCreationTimestamp="2025-01-29 11:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:59:35.604608344 +0000 UTC m=+37.640610377" watchObservedRunningTime="2025-01-29 11:59:35.640672963 +0000 UTC m=+37.676674995" Jan 29 11:59:35.642141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3788741131.mount: Deactivated successfully. Jan 29 11:59:40.351589 systemd[1]: Started sshd@7-172.31.24.218:22-139.178.68.195:60634.service - OpenSSH per-connection server daemon (139.178.68.195:60634). Jan 29 11:59:40.537850 sshd[4826]: Accepted publickey for core from 139.178.68.195 port 60634 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 11:59:40.540244 sshd[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:59:40.564412 systemd-logind[2069]: New session 8 of user core. Jan 29 11:59:40.567878 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:59:41.314754 sshd[4826]: pam_unix(sshd:session): session closed for user core Jan 29 11:59:41.320165 systemd[1]: sshd@7-172.31.24.218:22-139.178.68.195:60634.service: Deactivated successfully. Jan 29 11:59:41.329252 systemd-logind[2069]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:59:41.332124 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:59:41.333225 systemd-logind[2069]: Removed session 8. Jan 29 11:59:46.343068 systemd[1]: Started sshd@8-172.31.24.218:22-139.178.68.195:49912.service - OpenSSH per-connection server daemon (139.178.68.195:49912). Jan 29 11:59:46.495459 sshd[4843]: Accepted publickey for core from 139.178.68.195 port 49912 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 11:59:46.497267 sshd[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:59:46.504509 systemd-logind[2069]: New session 9 of user core. Jan 29 11:59:46.511683 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:59:46.724370 sshd[4843]: pam_unix(sshd:session): session closed for user core Jan 29 11:59:46.728093 systemd[1]: sshd@8-172.31.24.218:22-139.178.68.195:49912.service: Deactivated successfully. Jan 29 11:59:46.734031 systemd-logind[2069]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:59:46.734768 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:59:46.736994 systemd-logind[2069]: Removed session 9. Jan 29 11:59:51.754936 systemd[1]: Started sshd@9-172.31.24.218:22-139.178.68.195:49920.service - OpenSSH per-connection server daemon (139.178.68.195:49920). Jan 29 11:59:51.907061 sshd[4858]: Accepted publickey for core from 139.178.68.195 port 49920 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 11:59:51.908746 sshd[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:59:51.915146 systemd-logind[2069]: New session 10 of user core. Jan 29 11:59:51.920678 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:59:52.125893 sshd[4858]: pam_unix(sshd:session): session closed for user core Jan 29 11:59:52.130070 systemd[1]: sshd@9-172.31.24.218:22-139.178.68.195:49920.service: Deactivated successfully. Jan 29 11:59:52.138411 systemd-logind[2069]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:59:52.139360 systemd[1]: session-10.scope: Deactivated successfully. 
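For both coredns pods above, firstStartedPulling and lastFinishedPulling are "0001-01-01 00:00:00 +0000 UTC", Go's zero time: the image was already on the node, no pull window is subtracted, and podStartSLOduration equals podStartE2EDuration (23.601678814s and 23.640672963s respectively). The zero-time sentinel is easy to check:

package main

import (
	"fmt"
	"time"
)

func main() {
	// An unset time.Time prints exactly the value the tracker logs
	// when no image pull happened.
	var never time.Time
	fmt.Println(never.IsZero()) // true
	fmt.Println(never)          // 0001-01-01 00:00:00 +0000 UTC
}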
Jan 29 11:59:52.140788 systemd-logind[2069]: Removed session 10. Jan 29 11:59:57.156680 systemd[1]: Started sshd@10-172.31.24.218:22-139.178.68.195:44052.service - OpenSSH per-connection server daemon (139.178.68.195:44052). Jan 29 11:59:57.339840 sshd[4873]: Accepted publickey for core from 139.178.68.195 port 44052 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 11:59:57.341767 sshd[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:59:57.347246 systemd-logind[2069]: New session 11 of user core. Jan 29 11:59:57.352801 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:59:57.583114 sshd[4873]: pam_unix(sshd:session): session closed for user core Jan 29 11:59:57.598571 systemd[1]: sshd@10-172.31.24.218:22-139.178.68.195:44052.service: Deactivated successfully. Jan 29 11:59:57.605099 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:59:57.606555 systemd-logind[2069]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:59:57.613733 systemd[1]: Started sshd@11-172.31.24.218:22-139.178.68.195:44068.service - OpenSSH per-connection server daemon (139.178.68.195:44068). Jan 29 11:59:57.615449 systemd-logind[2069]: Removed session 11. Jan 29 11:59:57.799660 sshd[4888]: Accepted publickey for core from 139.178.68.195 port 44068 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 11:59:57.800605 sshd[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:59:57.816558 systemd-logind[2069]: New session 12 of user core. Jan 29 11:59:57.823667 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:59:58.192325 sshd[4888]: pam_unix(sshd:session): session closed for user core Jan 29 11:59:58.212157 systemd[1]: sshd@11-172.31.24.218:22-139.178.68.195:44068.service: Deactivated successfully. Jan 29 11:59:58.219714 systemd-logind[2069]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:59:58.240218 systemd[1]: Started sshd@12-172.31.24.218:22-139.178.68.195:44078.service - OpenSSH per-connection server daemon (139.178.68.195:44078). Jan 29 11:59:58.243092 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:59:58.250195 systemd-logind[2069]: Removed session 12. Jan 29 11:59:58.423536 sshd[4902]: Accepted publickey for core from 139.178.68.195 port 44078 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 11:59:58.425146 sshd[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:59:58.431296 systemd-logind[2069]: New session 13 of user core. Jan 29 11:59:58.434750 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:59:58.645817 sshd[4902]: pam_unix(sshd:session): session closed for user core Jan 29 11:59:58.650131 systemd-logind[2069]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:59:58.651739 systemd[1]: sshd@12-172.31.24.218:22-139.178.68.195:44078.service: Deactivated successfully. Jan 29 11:59:58.657186 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:59:58.658505 systemd-logind[2069]: Removed session 13. Jan 29 12:00:03.675104 systemd[1]: Started sshd@13-172.31.24.218:22-139.178.68.195:44094.service - OpenSSH per-connection server daemon (139.178.68.195:44094). 
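Every SSH session from here on follows the same four-beat shape: service started for the connection, publickey accepted, pam session opened, pam session closed, service deactivated. A sketch that pairs the pam open/close events by sshd PID and reports session length, for journal text on stdin; it assumes one entry per line and ignores the missing year in these syslog-style timestamps, which is harmless for durations within the same year:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var entry = regexp.MustCompile(
	`^(\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d{6}) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		m := entry.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		ts, err := time.Parse("Jan 02 15:04:05.000000", m[1])
		if err != nil {
			continue
		}
		if m[3] == "opened" {
			opened[m[2]] = ts
		} else if start, ok := opened[m[2]]; ok {
			fmt.Printf("sshd[%s] session length: %s\n", m[2], ts.Sub(start))
			delete(opened, m[2])
		}
	}
}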
Jan 29 12:00:03.837196 sshd[4917]: Accepted publickey for core from 139.178.68.195 port 44094 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:00:03.838789 sshd[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:03.844279 systemd-logind[2069]: New session 14 of user core. Jan 29 12:00:03.850768 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 12:00:04.049413 sshd[4917]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:04.054303 systemd[1]: sshd@13-172.31.24.218:22-139.178.68.195:44094.service: Deactivated successfully. Jan 29 12:00:04.061927 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 12:00:04.062257 systemd-logind[2069]: Session 14 logged out. Waiting for processes to exit. Jan 29 12:00:04.064225 systemd-logind[2069]: Removed session 14. Jan 29 12:00:09.083795 systemd[1]: Started sshd@14-172.31.24.218:22-139.178.68.195:43084.service - OpenSSH per-connection server daemon (139.178.68.195:43084). Jan 29 12:00:09.253714 sshd[4932]: Accepted publickey for core from 139.178.68.195 port 43084 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:00:09.255553 sshd[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:09.262955 systemd-logind[2069]: New session 15 of user core. Jan 29 12:00:09.270093 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 12:00:09.484281 sshd[4932]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:09.489858 systemd[1]: sshd@14-172.31.24.218:22-139.178.68.195:43084.service: Deactivated successfully. Jan 29 12:00:09.494872 systemd-logind[2069]: Session 15 logged out. Waiting for processes to exit. Jan 29 12:00:09.495712 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 12:00:09.497128 systemd-logind[2069]: Removed session 15. Jan 29 12:00:14.512145 systemd[1]: Started sshd@15-172.31.24.218:22-139.178.68.195:43096.service - OpenSSH per-connection server daemon (139.178.68.195:43096). Jan 29 12:00:14.677066 sshd[4948]: Accepted publickey for core from 139.178.68.195 port 43096 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:00:14.680669 sshd[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:14.692532 systemd-logind[2069]: New session 16 of user core. Jan 29 12:00:14.697633 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 12:00:14.917431 sshd[4948]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:14.928988 systemd[1]: sshd@15-172.31.24.218:22-139.178.68.195:43096.service: Deactivated successfully. Jan 29 12:00:14.939018 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 12:00:14.942807 systemd-logind[2069]: Session 16 logged out. Waiting for processes to exit. Jan 29 12:00:14.945385 systemd-logind[2069]: Removed session 16. Jan 29 12:00:14.950740 systemd[1]: Started sshd@16-172.31.24.218:22-139.178.68.195:53164.service - OpenSSH per-connection server daemon (139.178.68.195:53164). Jan 29 12:00:15.113414 sshd[4962]: Accepted publickey for core from 139.178.68.195 port 53164 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:00:15.115588 sshd[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:15.121797 systemd-logind[2069]: New session 17 of user core. Jan 29 12:00:15.128658 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 29 12:00:15.817453 sshd[4962]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:15.831909 systemd[1]: sshd@16-172.31.24.218:22-139.178.68.195:53164.service: Deactivated successfully. Jan 29 12:00:15.836661 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 12:00:15.846182 systemd-logind[2069]: Session 17 logged out. Waiting for processes to exit. Jan 29 12:00:15.858507 systemd[1]: Started sshd@17-172.31.24.218:22-139.178.68.195:53168.service - OpenSSH per-connection server daemon (139.178.68.195:53168). Jan 29 12:00:15.860235 systemd-logind[2069]: Removed session 17. Jan 29 12:00:16.035293 sshd[4974]: Accepted publickey for core from 139.178.68.195 port 53168 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:00:16.045362 sshd[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:16.067191 systemd-logind[2069]: New session 18 of user core. Jan 29 12:00:16.077627 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 12:00:18.132617 sshd[4974]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:18.138683 systemd[1]: sshd@17-172.31.24.218:22-139.178.68.195:53168.service: Deactivated successfully. Jan 29 12:00:18.150507 systemd-logind[2069]: Session 18 logged out. Waiting for processes to exit. Jan 29 12:00:18.153457 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 12:00:18.173144 systemd[1]: Started sshd@18-172.31.24.218:22-139.178.68.195:53176.service - OpenSSH per-connection server daemon (139.178.68.195:53176). Jan 29 12:00:18.176425 systemd-logind[2069]: Removed session 18. Jan 29 12:00:18.336147 sshd[4993]: Accepted publickey for core from 139.178.68.195 port 53176 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:00:18.338220 sshd[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:18.343612 systemd-logind[2069]: New session 19 of user core. Jan 29 12:00:18.347932 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 12:00:18.837789 systemd-resolved[1973]: Under memory pressure, flushing caches. Jan 29 12:00:18.839732 systemd-journald[1574]: Under memory pressure, flushing caches. Jan 29 12:00:18.837849 systemd-resolved[1973]: Flushed all caches. Jan 29 12:00:19.072811 sshd[4993]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:19.077962 systemd-logind[2069]: Session 19 logged out. Waiting for processes to exit. Jan 29 12:00:19.078892 systemd[1]: sshd@18-172.31.24.218:22-139.178.68.195:53176.service: Deactivated successfully. Jan 29 12:00:19.083960 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 12:00:19.085826 systemd-logind[2069]: Removed session 19. Jan 29 12:00:19.100741 systemd[1]: Started sshd@19-172.31.24.218:22-139.178.68.195:53178.service - OpenSSH per-connection server daemon (139.178.68.195:53178). Jan 29 12:00:19.256830 sshd[5005]: Accepted publickey for core from 139.178.68.195 port 53178 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:00:19.258425 sshd[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:19.263936 systemd-logind[2069]: New session 20 of user core. Jan 29 12:00:19.270367 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 12:00:19.477047 sshd[5005]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:19.483564 systemd-logind[2069]: Session 20 logged out. Waiting for processes to exit. 
Jan 29 12:00:19.484816 systemd[1]: sshd@19-172.31.24.218:22-139.178.68.195:53178.service: Deactivated successfully. Jan 29 12:00:19.488058 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 12:00:19.489760 systemd-logind[2069]: Removed session 20. Jan 29 12:00:24.508665 systemd[1]: Started sshd@20-172.31.24.218:22-139.178.68.195:53182.service - OpenSSH per-connection server daemon (139.178.68.195:53182). Jan 29 12:00:24.669733 sshd[5022]: Accepted publickey for core from 139.178.68.195 port 53182 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:00:24.670457 sshd[5022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:24.675293 systemd-logind[2069]: New session 21 of user core. Jan 29 12:00:24.687655 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 12:00:24.879069 sshd[5022]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:24.883046 systemd[1]: sshd@20-172.31.24.218:22-139.178.68.195:53182.service: Deactivated successfully. Jan 29 12:00:24.889692 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 12:00:24.889978 systemd-logind[2069]: Session 21 logged out. Waiting for processes to exit. Jan 29 12:00:24.892613 systemd-logind[2069]: Removed session 21. Jan 29 12:00:29.908996 systemd[1]: Started sshd@21-172.31.24.218:22-139.178.68.195:54800.service - OpenSSH per-connection server daemon (139.178.68.195:54800). Jan 29 12:00:30.067618 sshd[5036]: Accepted publickey for core from 139.178.68.195 port 54800 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:00:30.069154 sshd[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:30.074384 systemd-logind[2069]: New session 22 of user core. Jan 29 12:00:30.080743 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 12:00:30.269774 sshd[5036]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:30.274402 systemd[1]: sshd@21-172.31.24.218:22-139.178.68.195:54800.service: Deactivated successfully. Jan 29 12:00:30.279922 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 12:00:30.281028 systemd-logind[2069]: Session 22 logged out. Waiting for processes to exit. Jan 29 12:00:30.282228 systemd-logind[2069]: Removed session 22. Jan 29 12:00:35.298496 systemd[1]: Started sshd@22-172.31.24.218:22-139.178.68.195:50998.service - OpenSSH per-connection server daemon (139.178.68.195:50998). Jan 29 12:00:35.491397 sshd[5051]: Accepted publickey for core from 139.178.68.195 port 50998 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:00:35.493562 sshd[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:35.500577 systemd-logind[2069]: New session 23 of user core. Jan 29 12:00:35.512703 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 12:00:35.776826 sshd[5051]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:35.785601 systemd[1]: sshd@22-172.31.24.218:22-139.178.68.195:50998.service: Deactivated successfully. Jan 29 12:00:35.787421 systemd-logind[2069]: Session 23 logged out. Waiting for processes to exit. Jan 29 12:00:35.792606 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 12:00:35.794854 systemd-logind[2069]: Removed session 23. Jan 29 12:00:35.805948 systemd[1]: Started sshd@23-172.31.24.218:22-139.178.68.195:51010.service - OpenSSH per-connection server daemon (139.178.68.195:51010). 
Jan 29 12:00:35.974422 sshd[5064]: Accepted publickey for core from 139.178.68.195 port 51010 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs Jan 29 12:00:35.976487 sshd[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:35.983448 systemd-logind[2069]: New session 24 of user core. Jan 29 12:00:35.989795 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 12:00:37.900835 containerd[2099]: time="2025-01-29T12:00:37.900786558Z" level=info msg="StopContainer for \"96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1\" with timeout 30 (s)" Jan 29 12:00:37.901456 containerd[2099]: time="2025-01-29T12:00:37.901408052Z" level=info msg="Stop container \"96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1\" with signal terminated" Jan 29 12:00:37.914280 containerd[2099]: time="2025-01-29T12:00:37.914150879Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:00:37.918542 containerd[2099]: time="2025-01-29T12:00:37.918507622Z" level=info msg="StopContainer for \"3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916\" with timeout 2 (s)" Jan 29 12:00:37.918870 containerd[2099]: time="2025-01-29T12:00:37.918841327Z" level=info msg="Stop container \"3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916\" with signal terminated" Jan 29 12:00:37.940447 systemd-networkd[1652]: lxc_health: Link DOWN Jan 29 12:00:37.940455 systemd-networkd[1652]: lxc_health: Lost carrier Jan 29 12:00:37.993200 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1-rootfs.mount: Deactivated successfully. Jan 29 12:00:38.006868 containerd[2099]: time="2025-01-29T12:00:38.006802070Z" level=info msg="shim disconnected" id=96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1 namespace=k8s.io Jan 29 12:00:38.006868 containerd[2099]: time="2025-01-29T12:00:38.006862550Z" level=warning msg="cleaning up after shim disconnected" id=96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1 namespace=k8s.io Jan 29 12:00:38.006868 containerd[2099]: time="2025-01-29T12:00:38.006873102Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:00:38.019124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916-rootfs.mount: Deactivated successfully. 
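The "Stop container ... with signal terminated" entries above are the standard two-phase stop: send SIGTERM, wait up to the per-container timeout (30s for the operator, 2s for the cilium agent), then escalate to SIGKILL. The same shape for a plain process, standard library only (a sketch of the pattern, not containerd's implementation):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout asks the process to exit with SIGTERM, waits up to
// timeout, and escalates to SIGKILL, mirroring the runtime's behavior.
func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(timeout):
		_ = cmd.Process.Kill()
		return fmt.Errorf("killed after %s: %v", timeout, <-done)
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopWithTimeout(cmd, 2*time.Second))
}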
Jan 29 12:00:38.024941 containerd[2099]: time="2025-01-29T12:00:38.024872072Z" level=info msg="shim disconnected" id=3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916 namespace=k8s.io
Jan 29 12:00:38.024941 containerd[2099]: time="2025-01-29T12:00:38.024935986Z" level=warning msg="cleaning up after shim disconnected" id=3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916 namespace=k8s.io
Jan 29 12:00:38.025146 containerd[2099]: time="2025-01-29T12:00:38.024947054Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:00:38.046353 containerd[2099]: time="2025-01-29T12:00:38.045452610Z" level=info msg="StopContainer for \"96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1\" returns successfully"
Jan 29 12:00:38.057289 containerd[2099]: time="2025-01-29T12:00:38.055959446Z" level=info msg="StopPodSandbox for \"339f17df78eff15909c00d4eeae5d9d5ff19a0bbfd3490c04fa4957b605dbdea\""
Jan 29 12:00:38.057289 containerd[2099]: time="2025-01-29T12:00:38.056032487Z" level=info msg="Container to stop \"96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 12:00:38.060883 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-339f17df78eff15909c00d4eeae5d9d5ff19a0bbfd3490c04fa4957b605dbdea-shm.mount: Deactivated successfully.
Jan 29 12:00:38.064894 containerd[2099]: time="2025-01-29T12:00:38.064341118Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:00:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 12:00:38.071174 containerd[2099]: time="2025-01-29T12:00:38.071132337Z" level=info msg="StopContainer for \"3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916\" returns successfully"
Jan 29 12:00:38.073509 containerd[2099]: time="2025-01-29T12:00:38.072293862Z" level=info msg="StopPodSandbox for \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\""
Jan 29 12:00:38.073509 containerd[2099]: time="2025-01-29T12:00:38.072381433Z" level=info msg="Container to stop \"42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 12:00:38.073509 containerd[2099]: time="2025-01-29T12:00:38.072401181Z" level=info msg="Container to stop \"bdfea5c23aff25155352fa4fb555a6cc3f336a602e1960baa896f912c1cde374\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 12:00:38.073509 containerd[2099]: time="2025-01-29T12:00:38.072459411Z" level=info msg="Container to stop \"710cc0f425b29cf734c309a17796304f16c103f5fa6f9502fa62f90721e9285e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 12:00:38.073509 containerd[2099]: time="2025-01-29T12:00:38.072474156Z" level=info msg="Container to stop \"f0eb10f67b550c85f3758e10c736aac764c149c3a8220a8292967d687ae71e69\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 12:00:38.073509 containerd[2099]: time="2025-01-29T12:00:38.072507290Z" level=info msg="Container to stop \"3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 12:00:38.078433 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed-shm.mount: Deactivated successfully.
Jan 29 12:00:38.153741 containerd[2099]: time="2025-01-29T12:00:38.153580329Z" level=info msg="shim disconnected" id=1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed namespace=k8s.io
Jan 29 12:00:38.153741 containerd[2099]: time="2025-01-29T12:00:38.153657712Z" level=warning msg="cleaning up after shim disconnected" id=1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed namespace=k8s.io
Jan 29 12:00:38.153741 containerd[2099]: time="2025-01-29T12:00:38.153671537Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:00:38.165514 containerd[2099]: time="2025-01-29T12:00:38.165451309Z" level=info msg="shim disconnected" id=339f17df78eff15909c00d4eeae5d9d5ff19a0bbfd3490c04fa4957b605dbdea namespace=k8s.io
Jan 29 12:00:38.165862 containerd[2099]: time="2025-01-29T12:00:38.165756305Z" level=warning msg="cleaning up after shim disconnected" id=339f17df78eff15909c00d4eeae5d9d5ff19a0bbfd3490c04fa4957b605dbdea namespace=k8s.io
Jan 29 12:00:38.165862 containerd[2099]: time="2025-01-29T12:00:38.165778403Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:00:38.192933 containerd[2099]: time="2025-01-29T12:00:38.192742955Z" level=info msg="TearDown network for sandbox \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\" successfully"
Jan 29 12:00:38.192933 containerd[2099]: time="2025-01-29T12:00:38.192792306Z" level=info msg="StopPodSandbox for \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\" returns successfully"
Jan 29 12:00:38.195597 containerd[2099]: time="2025-01-29T12:00:38.195565535Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:00:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 12:00:38.196684 containerd[2099]: time="2025-01-29T12:00:38.196655335Z" level=info msg="TearDown network for sandbox \"339f17df78eff15909c00d4eeae5d9d5ff19a0bbfd3490c04fa4957b605dbdea\" successfully"
Jan 29 12:00:38.196684 containerd[2099]: time="2025-01-29T12:00:38.196679223Z" level=info msg="StopPodSandbox for \"339f17df78eff15909c00d4eeae5d9d5ff19a0bbfd3490c04fa4957b605dbdea\" returns successfully"
Jan 29 12:00:38.364975 kubelet[3378]: I0129 12:00:38.364921 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-hostproc\") pod \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") "
Jan 29 12:00:38.366998 kubelet[3378]: I0129 12:00:38.365002 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-bpf-maps\") pod \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") "
Jan 29 12:00:38.366998 kubelet[3378]: I0129 12:00:38.365023 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-xtables-lock\") pod \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") "
Jan 29 12:00:38.366998 kubelet[3378]: I0129 12:00:38.365055 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4dvj\" (UniqueName: \"kubernetes.io/projected/76a569f5-e2e9-49c0-a8e1-9d80860a0223-kube-api-access-q4dvj\") pod \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") "
Jan 29 12:00:38.366998 kubelet[3378]: I0129 12:00:38.365078 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-cilium-run\") pod \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") "
Jan 29 12:00:38.366998 kubelet[3378]: I0129 12:00:38.365123 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmmdc\" (UniqueName: \"kubernetes.io/projected/cd0b4b26-c048-44be-a077-43b9f9d423ed-kube-api-access-fmmdc\") pod \"cd0b4b26-c048-44be-a077-43b9f9d423ed\" (UID: \"cd0b4b26-c048-44be-a077-43b9f9d423ed\") "
Jan 29 12:00:38.366998 kubelet[3378]: I0129 12:00:38.365152 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd0b4b26-c048-44be-a077-43b9f9d423ed-cilium-config-path\") pod \"cd0b4b26-c048-44be-a077-43b9f9d423ed\" (UID: \"cd0b4b26-c048-44be-a077-43b9f9d423ed\") "
Jan 29 12:00:38.367283 kubelet[3378]: I0129 12:00:38.365173 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-host-proc-sys-net\") pod \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") "
Jan 29 12:00:38.367283 kubelet[3378]: I0129 12:00:38.365193 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-cni-path\") pod \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") "
Jan 29 12:00:38.367283 kubelet[3378]: I0129 12:00:38.365214 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-host-proc-sys-kernel\") pod \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") "
Jan 29 12:00:38.367283 kubelet[3378]: I0129 12:00:38.365238 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76a569f5-e2e9-49c0-a8e1-9d80860a0223-cilium-config-path\") pod \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") "
Jan 29 12:00:38.367283 kubelet[3378]: I0129 12:00:38.365264 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-cilium-cgroup\") pod \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") "
Jan 29 12:00:38.367283 kubelet[3378]: I0129 12:00:38.365287 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/76a569f5-e2e9-49c0-a8e1-9d80860a0223-hubble-tls\") pod \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") "
Jan 29 12:00:38.367566 kubelet[3378]: I0129 12:00:38.365343 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/76a569f5-e2e9-49c0-a8e1-9d80860a0223-clustermesh-secrets\") pod \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") "
Jan 29 12:00:38.367566 kubelet[3378]: I0129 12:00:38.365365 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-etc-cni-netd\") pod \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") "
Jan 29 12:00:38.367566 kubelet[3378]: I0129 12:00:38.365387 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-lib-modules\") pod \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\" (UID: \"76a569f5-e2e9-49c0-a8e1-9d80860a0223\") "
Jan 29 12:00:38.367687 kubelet[3378]: I0129 12:00:38.365460 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "76a569f5-e2e9-49c0-a8e1-9d80860a0223" (UID: "76a569f5-e2e9-49c0-a8e1-9d80860a0223"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:00:38.367734 kubelet[3378]: I0129 12:00:38.367690 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "76a569f5-e2e9-49c0-a8e1-9d80860a0223" (UID: "76a569f5-e2e9-49c0-a8e1-9d80860a0223"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:00:38.367734 kubelet[3378]: I0129 12:00:38.367715 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "76a569f5-e2e9-49c0-a8e1-9d80860a0223" (UID: "76a569f5-e2e9-49c0-a8e1-9d80860a0223"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:00:38.367845 kubelet[3378]: I0129 12:00:38.366734 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-hostproc" (OuterVolumeSpecName: "hostproc") pod "76a569f5-e2e9-49c0-a8e1-9d80860a0223" (UID: "76a569f5-e2e9-49c0-a8e1-9d80860a0223"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:00:38.367933 kubelet[3378]: I0129 12:00:38.367917 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "76a569f5-e2e9-49c0-a8e1-9d80860a0223" (UID: "76a569f5-e2e9-49c0-a8e1-9d80860a0223"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:00:38.368023 kubelet[3378]: I0129 12:00:38.367997 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "76a569f5-e2e9-49c0-a8e1-9d80860a0223" (UID: "76a569f5-e2e9-49c0-a8e1-9d80860a0223"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:00:38.386957 kubelet[3378]: I0129 12:00:38.385995 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd0b4b26-c048-44be-a077-43b9f9d423ed-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cd0b4b26-c048-44be-a077-43b9f9d423ed" (UID: "cd0b4b26-c048-44be-a077-43b9f9d423ed"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 12:00:38.386957 kubelet[3378]: I0129 12:00:38.386073 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "76a569f5-e2e9-49c0-a8e1-9d80860a0223" (UID: "76a569f5-e2e9-49c0-a8e1-9d80860a0223"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:00:38.386957 kubelet[3378]: I0129 12:00:38.386099 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-cni-path" (OuterVolumeSpecName: "cni-path") pod "76a569f5-e2e9-49c0-a8e1-9d80860a0223" (UID: "76a569f5-e2e9-49c0-a8e1-9d80860a0223"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:00:38.391295 kubelet[3378]: I0129 12:00:38.391254 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76a569f5-e2e9-49c0-a8e1-9d80860a0223-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "76a569f5-e2e9-49c0-a8e1-9d80860a0223" (UID: "76a569f5-e2e9-49c0-a8e1-9d80860a0223"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 12:00:38.391511 kubelet[3378]: I0129 12:00:38.391492 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "76a569f5-e2e9-49c0-a8e1-9d80860a0223" (UID: "76a569f5-e2e9-49c0-a8e1-9d80860a0223"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:00:38.393647 kubelet[3378]: I0129 12:00:38.393607 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "76a569f5-e2e9-49c0-a8e1-9d80860a0223" (UID: "76a569f5-e2e9-49c0-a8e1-9d80860a0223"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:00:38.393769 kubelet[3378]: I0129 12:00:38.393658 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76a569f5-e2e9-49c0-a8e1-9d80860a0223-kube-api-access-q4dvj" (OuterVolumeSpecName: "kube-api-access-q4dvj") pod "76a569f5-e2e9-49c0-a8e1-9d80860a0223" (UID: "76a569f5-e2e9-49c0-a8e1-9d80860a0223"). InnerVolumeSpecName "kube-api-access-q4dvj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:00:38.393769 kubelet[3378]: I0129 12:00:38.393689 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd0b4b26-c048-44be-a077-43b9f9d423ed-kube-api-access-fmmdc" (OuterVolumeSpecName: "kube-api-access-fmmdc") pod "cd0b4b26-c048-44be-a077-43b9f9d423ed" (UID: "cd0b4b26-c048-44be-a077-43b9f9d423ed"). InnerVolumeSpecName "kube-api-access-fmmdc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:00:38.393769 kubelet[3378]: I0129 12:00:38.393704 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76a569f5-e2e9-49c0-a8e1-9d80860a0223-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "76a569f5-e2e9-49c0-a8e1-9d80860a0223" (UID: "76a569f5-e2e9-49c0-a8e1-9d80860a0223"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:00:38.394627 kubelet[3378]: I0129 12:00:38.394596 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a569f5-e2e9-49c0-a8e1-9d80860a0223-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "76a569f5-e2e9-49c0-a8e1-9d80860a0223" (UID: "76a569f5-e2e9-49c0-a8e1-9d80860a0223"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 12:00:38.421109 kubelet[3378]: E0129 12:00:38.410691 3378 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 12:00:38.466361 kubelet[3378]: I0129 12:00:38.466179 3378 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-lib-modules\") on node \"ip-172-31-24-218\" DevicePath \"\""
Jan 29 12:00:38.466361 kubelet[3378]: I0129 12:00:38.466219 3378 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-hostproc\") on node \"ip-172-31-24-218\" DevicePath \"\""
Jan 29 12:00:38.466361 kubelet[3378]: I0129 12:00:38.466232 3378 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-bpf-maps\") on node \"ip-172-31-24-218\" DevicePath \"\""
Jan 29 12:00:38.466361 kubelet[3378]: I0129 12:00:38.466244 3378 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-xtables-lock\") on node \"ip-172-31-24-218\" DevicePath \"\""
Jan 29 12:00:38.466361 kubelet[3378]: I0129 12:00:38.466258 3378 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-q4dvj\" (UniqueName: \"kubernetes.io/projected/76a569f5-e2e9-49c0-a8e1-9d80860a0223-kube-api-access-q4dvj\") on node \"ip-172-31-24-218\" DevicePath \"\""
Jan 29 12:00:38.466361 kubelet[3378]: I0129 12:00:38.466269 3378 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-cilium-run\") on node \"ip-172-31-24-218\" DevicePath \"\""
Jan 29 12:00:38.466361 kubelet[3378]: I0129 12:00:38.466281 3378 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fmmdc\" (UniqueName: \"kubernetes.io/projected/cd0b4b26-c048-44be-a077-43b9f9d423ed-kube-api-access-fmmdc\") on node \"ip-172-31-24-218\" DevicePath \"\""
Jan 29 12:00:38.466361 kubelet[3378]: I0129 12:00:38.466293 3378 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd0b4b26-c048-44be-a077-43b9f9d423ed-cilium-config-path\") on node \"ip-172-31-24-218\" DevicePath \"\""
Jan 29 12:00:38.467152 kubelet[3378]: I0129 12:00:38.466305 3378 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-host-proc-sys-net\") on node \"ip-172-31-24-218\" DevicePath \"\""
Jan 29 12:00:38.467152 kubelet[3378]: I0129 12:00:38.466338 3378 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-cni-path\") on node \"ip-172-31-24-218\" DevicePath \"\""
Jan 29 12:00:38.467152 kubelet[3378]: I0129 12:00:38.466350 3378 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-host-proc-sys-kernel\") on node \"ip-172-31-24-218\" DevicePath \"\""
Jan 29 12:00:38.467152 kubelet[3378]: I0129 12:00:38.466362 3378 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76a569f5-e2e9-49c0-a8e1-9d80860a0223-cilium-config-path\") on node \"ip-172-31-24-218\" DevicePath \"\""
Jan 29 12:00:38.467152 kubelet[3378]: I0129 12:00:38.466372 3378 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/76a569f5-e2e9-49c0-a8e1-9d80860a0223-clustermesh-secrets\") on node \"ip-172-31-24-218\" DevicePath \"\""
Jan 29 12:00:38.467152 kubelet[3378]: I0129 12:00:38.466384 3378 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-etc-cni-netd\") on node \"ip-172-31-24-218\" DevicePath \"\""
Jan 29 12:00:38.467152 kubelet[3378]: I0129 12:00:38.466395 3378 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/76a569f5-e2e9-49c0-a8e1-9d80860a0223-cilium-cgroup\") on node \"ip-172-31-24-218\" DevicePath \"\""
Jan 29 12:00:38.467152 kubelet[3378]: I0129 12:00:38.466406 3378 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/76a569f5-e2e9-49c0-a8e1-9d80860a0223-hubble-tls\") on node \"ip-172-31-24-218\" DevicePath \"\""
Jan 29 12:00:38.750230 kubelet[3378]: I0129 12:00:38.749634 3378 scope.go:117] "RemoveContainer" containerID="96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1"
Jan 29 12:00:38.763803 containerd[2099]: time="2025-01-29T12:00:38.763430116Z" level=info msg="RemoveContainer for \"96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1\""
Jan 29 12:00:38.775593 containerd[2099]: time="2025-01-29T12:00:38.773436154Z" level=info msg="RemoveContainer for \"96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1\" returns successfully"
Jan 29 12:00:38.780490 kubelet[3378]: I0129 12:00:38.779521 3378 scope.go:117] "RemoveContainer" containerID="96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1"
Jan 29 12:00:38.811098 containerd[2099]: time="2025-01-29T12:00:38.780102476Z" level=error msg="ContainerStatus for \"96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1\": not found"
Jan 29 12:00:38.814913 kubelet[3378]: E0129 12:00:38.814868 3378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1\": not found" containerID="96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1"
Jan 29 12:00:38.816181 kubelet[3378]: I0129 12:00:38.815193 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1"} err="failed to get container status \"96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1\": rpc error: code = NotFound desc = an error occurred when try to find container \"96d42cb1c8030470b43e92376fa5c246f34878104a9b1610ef8c3356b1b0eed1\": not found"
Jan 29 12:00:38.816181 kubelet[3378]: I0129 12:00:38.815357 3378 scope.go:117] "RemoveContainer" containerID="3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916"
Jan 29 12:00:38.818525 containerd[2099]: time="2025-01-29T12:00:38.818492308Z" level=info msg="RemoveContainer for \"3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916\""
Jan 29 12:00:38.822611 containerd[2099]: time="2025-01-29T12:00:38.822566574Z" level=info msg="RemoveContainer for \"3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916\" returns successfully"
Jan 29 12:00:38.822807 kubelet[3378]: I0129 12:00:38.822782 3378 scope.go:117] "RemoveContainer" containerID="f0eb10f67b550c85f3758e10c736aac764c149c3a8220a8292967d687ae71e69"
Jan 29 12:00:38.823766 containerd[2099]: time="2025-01-29T12:00:38.823736680Z" level=info msg="RemoveContainer for \"f0eb10f67b550c85f3758e10c736aac764c149c3a8220a8292967d687ae71e69\""
Jan 29 12:00:38.828235 containerd[2099]: time="2025-01-29T12:00:38.828196050Z" level=info msg="RemoveContainer for \"f0eb10f67b550c85f3758e10c736aac764c149c3a8220a8292967d687ae71e69\" returns successfully"
Jan 29 12:00:38.828504 kubelet[3378]: I0129 12:00:38.828479 3378 scope.go:117] "RemoveContainer" containerID="710cc0f425b29cf734c309a17796304f16c103f5fa6f9502fa62f90721e9285e"
Jan 29 12:00:38.831213 containerd[2099]: time="2025-01-29T12:00:38.830431816Z" level=info msg="RemoveContainer for \"710cc0f425b29cf734c309a17796304f16c103f5fa6f9502fa62f90721e9285e\""
Jan 29 12:00:38.833849 containerd[2099]: time="2025-01-29T12:00:38.833814386Z" level=info msg="RemoveContainer for \"710cc0f425b29cf734c309a17796304f16c103f5fa6f9502fa62f90721e9285e\" returns successfully"
Jan 29 12:00:38.834095 kubelet[3378]: I0129 12:00:38.834069 3378 scope.go:117] "RemoveContainer" containerID="bdfea5c23aff25155352fa4fb555a6cc3f336a602e1960baa896f912c1cde374"
Jan 29 12:00:38.835187 containerd[2099]: time="2025-01-29T12:00:38.835155119Z" level=info msg="RemoveContainer for \"bdfea5c23aff25155352fa4fb555a6cc3f336a602e1960baa896f912c1cde374\""
Jan 29 12:00:38.838006 containerd[2099]: time="2025-01-29T12:00:38.837971231Z" level=info msg="RemoveContainer for \"bdfea5c23aff25155352fa4fb555a6cc3f336a602e1960baa896f912c1cde374\" returns successfully"
Jan 29 12:00:38.838152 kubelet[3378]: I0129 12:00:38.838126 3378 scope.go:117] "RemoveContainer" containerID="42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee"
Jan 29 12:00:38.839308 containerd[2099]: time="2025-01-29T12:00:38.839284007Z" level=info msg="RemoveContainer for \"42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee\""
Jan 29 12:00:38.842797 containerd[2099]: time="2025-01-29T12:00:38.842712102Z" level=info msg="RemoveContainer for \"42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee\" returns successfully"
Jan 29 12:00:38.843041 kubelet[3378]: I0129 12:00:38.842916 3378 scope.go:117] "RemoveContainer" containerID="3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916"
Jan 29 12:00:38.843280 containerd[2099]: time="2025-01-29T12:00:38.843220324Z" level=error msg="ContainerStatus for \"3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916\": not found"
Jan 29 12:00:38.843394 kubelet[3378]: E0129 12:00:38.843357 3378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916\": not found" containerID="3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916"
Jan 29 12:00:38.843459 kubelet[3378]: I0129 12:00:38.843390 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916"} err="failed to get container status \"3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916\": rpc error: code = NotFound desc = an error occurred when try to find container \"3cb262c4a3ab15d0691c287038ccfb4c246814b0153f1b0cc3c5ab81a6b47916\": not found"
Jan 29 12:00:38.843459 kubelet[3378]: I0129 12:00:38.843427 3378 scope.go:117] "RemoveContainer" containerID="f0eb10f67b550c85f3758e10c736aac764c149c3a8220a8292967d687ae71e69"
Jan 29 12:00:38.843654 containerd[2099]: time="2025-01-29T12:00:38.843622307Z" level=error msg="ContainerStatus for \"f0eb10f67b550c85f3758e10c736aac764c149c3a8220a8292967d687ae71e69\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0eb10f67b550c85f3758e10c736aac764c149c3a8220a8292967d687ae71e69\": not found"
Jan 29 12:00:38.843816 kubelet[3378]: E0129 12:00:38.843789 3378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0eb10f67b550c85f3758e10c736aac764c149c3a8220a8292967d687ae71e69\": not found" containerID="f0eb10f67b550c85f3758e10c736aac764c149c3a8220a8292967d687ae71e69"
Jan 29 12:00:38.843875 kubelet[3378]: I0129 12:00:38.843818 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0eb10f67b550c85f3758e10c736aac764c149c3a8220a8292967d687ae71e69"} err="failed to get container status \"f0eb10f67b550c85f3758e10c736aac764c149c3a8220a8292967d687ae71e69\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0eb10f67b550c85f3758e10c736aac764c149c3a8220a8292967d687ae71e69\": not found"
Jan 29 12:00:38.843875 kubelet[3378]: I0129 12:00:38.843864 3378 scope.go:117] "RemoveContainer" containerID="710cc0f425b29cf734c309a17796304f16c103f5fa6f9502fa62f90721e9285e"
Jan 29 12:00:38.844153 containerd[2099]: time="2025-01-29T12:00:38.844025585Z" level=error msg="ContainerStatus for \"710cc0f425b29cf734c309a17796304f16c103f5fa6f9502fa62f90721e9285e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"710cc0f425b29cf734c309a17796304f16c103f5fa6f9502fa62f90721e9285e\": not found"
Jan 29 12:00:38.844416 kubelet[3378]: E0129 12:00:38.844246 3378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"710cc0f425b29cf734c309a17796304f16c103f5fa6f9502fa62f90721e9285e\": not found" containerID="710cc0f425b29cf734c309a17796304f16c103f5fa6f9502fa62f90721e9285e"
Jan 29 12:00:38.844416 kubelet[3378]: I0129 12:00:38.844263 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"710cc0f425b29cf734c309a17796304f16c103f5fa6f9502fa62f90721e9285e"} err="failed to get container status \"710cc0f425b29cf734c309a17796304f16c103f5fa6f9502fa62f90721e9285e\": rpc error: code = NotFound desc = an error occurred when try to find container \"710cc0f425b29cf734c309a17796304f16c103f5fa6f9502fa62f90721e9285e\": not found"
Jan 29 12:00:38.844416 kubelet[3378]: I0129 12:00:38.844282 3378 scope.go:117] "RemoveContainer" containerID="bdfea5c23aff25155352fa4fb555a6cc3f336a602e1960baa896f912c1cde374"
Jan 29 12:00:38.844773 containerd[2099]: time="2025-01-29T12:00:38.844741433Z" level=error msg="ContainerStatus for \"bdfea5c23aff25155352fa4fb555a6cc3f336a602e1960baa896f912c1cde374\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdfea5c23aff25155352fa4fb555a6cc3f336a602e1960baa896f912c1cde374\": not found"
Jan 29 12:00:38.844896 kubelet[3378]: E0129 12:00:38.844871 3378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdfea5c23aff25155352fa4fb555a6cc3f336a602e1960baa896f912c1cde374\": not found" containerID="bdfea5c23aff25155352fa4fb555a6cc3f336a602e1960baa896f912c1cde374"
Jan 29 12:00:38.844967 kubelet[3378]: I0129 12:00:38.844924 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdfea5c23aff25155352fa4fb555a6cc3f336a602e1960baa896f912c1cde374"} err="failed to get container status \"bdfea5c23aff25155352fa4fb555a6cc3f336a602e1960baa896f912c1cde374\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdfea5c23aff25155352fa4fb555a6cc3f336a602e1960baa896f912c1cde374\": not found"
Jan 29 12:00:38.844967 kubelet[3378]: I0129 12:00:38.844944 3378 scope.go:117] "RemoveContainer" containerID="42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee"
Jan 29 12:00:38.845148 containerd[2099]: time="2025-01-29T12:00:38.845106890Z" level=error msg="ContainerStatus for \"42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee\": not found"
Jan 29 12:00:38.845232 kubelet[3378]: E0129 12:00:38.845213 3378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee\": not found" containerID="42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee"
Jan 29 12:00:38.845288 kubelet[3378]: I0129 12:00:38.845239 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee"} err="failed to get container status \"42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee\": rpc error: code = NotFound desc = an error occurred when try to find container \"42c9848721a977e5b2e286c8adb48166c5f7d5da711e401a8a96bfc11ac06dee\": not found"
Jan 29 12:00:38.874503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-339f17df78eff15909c00d4eeae5d9d5ff19a0bbfd3490c04fa4957b605dbdea-rootfs.mount: Deactivated successfully.
Jan 29 12:00:38.874688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed-rootfs.mount: Deactivated successfully.
Jan 29 12:00:38.874823 systemd[1]: var-lib-kubelet-pods-76a569f5\x2de2e9\x2d49c0\x2da8e1\x2d9d80860a0223-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq4dvj.mount: Deactivated successfully.
Jan 29 12:00:38.874957 systemd[1]: var-lib-kubelet-pods-cd0b4b26\x2dc048\x2d44be\x2da077\x2d43b9f9d423ed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfmmdc.mount: Deactivated successfully.
Jan 29 12:00:38.875093 systemd[1]: var-lib-kubelet-pods-76a569f5\x2de2e9\x2d49c0\x2da8e1\x2d9d80860a0223-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 29 12:00:38.875227 systemd[1]: var-lib-kubelet-pods-76a569f5\x2de2e9\x2d49c0\x2da8e1\x2d9d80860a0223-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 29 12:00:39.695659 sshd[5064]: pam_unix(sshd:session): session closed for user core
Jan 29 12:00:39.699478 systemd[1]: sshd@23-172.31.24.218:22-139.178.68.195:51010.service: Deactivated successfully.
Jan 29 12:00:39.704982 systemd-logind[2069]: Session 24 logged out. Waiting for processes to exit.
Jan 29 12:00:39.705717 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 12:00:39.707754 systemd-logind[2069]: Removed session 24.
Jan 29 12:00:39.721633 systemd[1]: Started sshd@24-172.31.24.218:22-139.178.68.195:51018.service - OpenSSH per-connection server daemon (139.178.68.195:51018).
Jan 29 12:00:39.890383 sshd[5232]: Accepted publickey for core from 139.178.68.195 port 51018 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:00:39.892019 sshd[5232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:00:39.897650 systemd-logind[2069]: New session 25 of user core.
Jan 29 12:00:39.902784 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 12:00:40.231152 kubelet[3378]: E0129 12:00:40.229530 3378 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-2p4ch" podUID="ccdc9d4e-3dec-4c2c-8805-db5fbcb28da8"
Jan 29 12:00:40.232216 kubelet[3378]: I0129 12:00:40.232131 3378 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76a569f5-e2e9-49c0-a8e1-9d80860a0223" path="/var/lib/kubelet/pods/76a569f5-e2e9-49c0-a8e1-9d80860a0223/volumes"
Jan 29 12:00:40.234401 kubelet[3378]: I0129 12:00:40.233958 3378 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd0b4b26-c048-44be-a077-43b9f9d423ed" path="/var/lib/kubelet/pods/cd0b4b26-c048-44be-a077-43b9f9d423ed/volumes"
Jan 29 12:00:40.408170 kubelet[3378]: I0129 12:00:40.408078 3378 setters.go:580] "Node became not ready" node="ip-172-31-24-218" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T12:00:40Z","lastTransitionTime":"2025-01-29T12:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 12:00:40.630423 ntpd[2047]: Deleting interface #10 lxc_health, fe80::fc18:8dff:fe89:88c4%8#123, interface stats: received=0, sent=0, dropped=0, active_time=67 secs
Jan 29 12:00:40.631782 ntpd[2047]: 29 Jan 12:00:40 ntpd[2047]: Deleting interface #10 lxc_health, fe80::fc18:8dff:fe89:88c4%8#123, interface stats: received=0, sent=0, dropped=0, active_time=67 secs
Jan 29 12:00:40.984667 sshd[5232]: pam_unix(sshd:session): session closed for user core
Jan 29 12:00:40.997133 systemd-logind[2069]: Session 25 logged out. Waiting for processes to exit.
Jan 29 12:00:40.998564 systemd[1]: sshd@24-172.31.24.218:22-139.178.68.195:51018.service: Deactivated successfully.
Jan 29 12:00:41.024761 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 12:00:41.044717 systemd[1]: Started sshd@25-172.31.24.218:22-139.178.68.195:51020.service - OpenSSH per-connection server daemon (139.178.68.195:51020).
Jan 29 12:00:41.053862 kubelet[3378]: I0129 12:00:41.037832 3378 topology_manager.go:215] "Topology Admit Handler" podUID="2553f758-7be0-41ff-b132-173573b81f03" podNamespace="kube-system" podName="cilium-8wr9f"
Jan 29 12:00:41.054832 systemd-logind[2069]: Removed session 25.
Jan 29 12:00:41.070637 kubelet[3378]: E0129 12:00:41.070601 3378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="76a569f5-e2e9-49c0-a8e1-9d80860a0223" containerName="apply-sysctl-overwrites"
Jan 29 12:00:41.072109 kubelet[3378]: E0129 12:00:41.071955 3378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cd0b4b26-c048-44be-a077-43b9f9d423ed" containerName="cilium-operator"
Jan 29 12:00:41.072109 kubelet[3378]: E0129 12:00:41.071980 3378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="76a569f5-e2e9-49c0-a8e1-9d80860a0223" containerName="cilium-agent"
Jan 29 12:00:41.072109 kubelet[3378]: E0129 12:00:41.071991 3378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="76a569f5-e2e9-49c0-a8e1-9d80860a0223" containerName="mount-bpf-fs"
Jan 29 12:00:41.072109 kubelet[3378]: E0129 12:00:41.071999 3378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="76a569f5-e2e9-49c0-a8e1-9d80860a0223" containerName="clean-cilium-state"
Jan 29 12:00:41.072109 kubelet[3378]: E0129 12:00:41.072008 3378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="76a569f5-e2e9-49c0-a8e1-9d80860a0223" containerName="mount-cgroup"
Jan 29 12:00:41.089667 kubelet[3378]: I0129 12:00:41.072061 3378 memory_manager.go:354] "RemoveStaleState removing state" podUID="76a569f5-e2e9-49c0-a8e1-9d80860a0223" containerName="cilium-agent"
Jan 29 12:00:41.089667 kubelet[3378]: I0129 12:00:41.089523 3378 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0b4b26-c048-44be-a077-43b9f9d423ed" containerName="cilium-operator"
Jan 29 12:00:41.295511 sshd[5245]: Accepted publickey for core from 139.178.68.195 port 51020 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:00:41.297436 kubelet[3378]: I0129 12:00:41.296896 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2553f758-7be0-41ff-b132-173573b81f03-clustermesh-secrets\") pod \"cilium-8wr9f\" (UID: \"2553f758-7be0-41ff-b132-173573b81f03\") " pod="kube-system/cilium-8wr9f"
Jan 29 12:00:41.297436 kubelet[3378]: I0129 12:00:41.297000 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2553f758-7be0-41ff-b132-173573b81f03-etc-cni-netd\") pod \"cilium-8wr9f\" (UID: \"2553f758-7be0-41ff-b132-173573b81f03\") " pod="kube-system/cilium-8wr9f"
Jan 29 12:00:41.297436 kubelet[3378]: I0129 12:00:41.297028 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2553f758-7be0-41ff-b132-173573b81f03-xtables-lock\") pod \"cilium-8wr9f\" (UID: \"2553f758-7be0-41ff-b132-173573b81f03\") " pod="kube-system/cilium-8wr9f"
Jan 29 12:00:41.297436 kubelet[3378]: I0129 12:00:41.297053 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2553f758-7be0-41ff-b132-173573b81f03-cilium-cgroup\") pod \"cilium-8wr9f\" (UID: \"2553f758-7be0-41ff-b132-173573b81f03\") " pod="kube-system/cilium-8wr9f"
Jan 29 12:00:41.297436 kubelet[3378]: I0129 12:00:41.297074 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2553f758-7be0-41ff-b132-173573b81f03-cilium-run\") pod \"cilium-8wr9f\" (UID: \"2553f758-7be0-41ff-b132-173573b81f03\") " pod="kube-system/cilium-8wr9f"
Jan 29 12:00:41.297436 kubelet[3378]: I0129 12:00:41.297098 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2553f758-7be0-41ff-b132-173573b81f03-hubble-tls\") pod \"cilium-8wr9f\" (UID: \"2553f758-7be0-41ff-b132-173573b81f03\") " pod="kube-system/cilium-8wr9f"
Jan 29 12:00:41.298246 kubelet[3378]: I0129 12:00:41.297121 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnhjl\" (UniqueName: \"kubernetes.io/projected/2553f758-7be0-41ff-b132-173573b81f03-kube-api-access-vnhjl\") pod \"cilium-8wr9f\" (UID: \"2553f758-7be0-41ff-b132-173573b81f03\") " pod="kube-system/cilium-8wr9f"
Jan 29 12:00:41.298246 kubelet[3378]: I0129 12:00:41.297148 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2553f758-7be0-41ff-b132-173573b81f03-cni-path\") pod \"cilium-8wr9f\" (UID: \"2553f758-7be0-41ff-b132-173573b81f03\") " pod="kube-system/cilium-8wr9f"
Jan 29 12:00:41.298246 kubelet[3378]: I0129 12:00:41.297170 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2553f758-7be0-41ff-b132-173573b81f03-cilium-ipsec-secrets\") pod \"cilium-8wr9f\" (UID: \"2553f758-7be0-41ff-b132-173573b81f03\") " pod="kube-system/cilium-8wr9f"
Jan 29 12:00:41.298246 kubelet[3378]: I0129 12:00:41.297190 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2553f758-7be0-41ff-b132-173573b81f03-host-proc-sys-net\") pod \"cilium-8wr9f\" (UID: \"2553f758-7be0-41ff-b132-173573b81f03\") " pod="kube-system/cilium-8wr9f"
Jan 29 12:00:41.298246 kubelet[3378]: I0129 12:00:41.297225 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2553f758-7be0-41ff-b132-173573b81f03-lib-modules\") pod \"cilium-8wr9f\" (UID: \"2553f758-7be0-41ff-b132-173573b81f03\") " pod="kube-system/cilium-8wr9f"
Jan 29 12:00:41.298246 kubelet[3378]: I0129 12:00:41.297253 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2553f758-7be0-41ff-b132-173573b81f03-hostproc\") pod \"cilium-8wr9f\" (UID: \"2553f758-7be0-41ff-b132-173573b81f03\") " pod="kube-system/cilium-8wr9f"
Jan 29 12:00:41.298544 kubelet[3378]: I0129 12:00:41.297273 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2553f758-7be0-41ff-b132-173573b81f03-cilium-config-path\") pod \"cilium-8wr9f\" (UID: \"2553f758-7be0-41ff-b132-173573b81f03\") " pod="kube-system/cilium-8wr9f"
Jan 29 12:00:41.298544 kubelet[3378]: I0129 12:00:41.297295 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2553f758-7be0-41ff-b132-173573b81f03-host-proc-sys-kernel\") pod \"cilium-8wr9f\" (UID: \"2553f758-7be0-41ff-b132-173573b81f03\") " pod="kube-system/cilium-8wr9f"
Jan 29 12:00:41.298544 kubelet[3378]: I0129 12:00:41.297328 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2553f758-7be0-41ff-b132-173573b81f03-bpf-maps\") pod \"cilium-8wr9f\" (UID: \"2553f758-7be0-41ff-b132-173573b81f03\") " pod="kube-system/cilium-8wr9f"
Jan 29 12:00:41.298505 sshd[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:00:41.304803 systemd-logind[2069]: New session 26 of user core.
Jan 29 12:00:41.310673 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 12:00:41.434515 sshd[5245]: pam_unix(sshd:session): session closed for user core
Jan 29 12:00:41.449516 containerd[2099]: time="2025-01-29T12:00:41.449481054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8wr9f,Uid:2553f758-7be0-41ff-b132-173573b81f03,Namespace:kube-system,Attempt:0,}"
Jan 29 12:00:41.457970 systemd[1]: sshd@25-172.31.24.218:22-139.178.68.195:51020.service: Deactivated successfully.
Jan 29 12:00:41.472803 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 12:00:41.482070 systemd-logind[2069]: Session 26 logged out. Waiting for processes to exit.
Jan 29 12:00:41.494659 systemd[1]: Started sshd@26-172.31.24.218:22-139.178.68.195:51032.service - OpenSSH per-connection server daemon (139.178.68.195:51032).
Jan 29 12:00:41.500260 systemd-logind[2069]: Removed session 26.
Jan 29 12:00:41.504136 containerd[2099]: time="2025-01-29T12:00:41.503791671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:00:41.504136 containerd[2099]: time="2025-01-29T12:00:41.503956090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:00:41.504136 containerd[2099]: time="2025-01-29T12:00:41.503991505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:00:41.504382 containerd[2099]: time="2025-01-29T12:00:41.504233045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:00:41.573703 containerd[2099]: time="2025-01-29T12:00:41.573594454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8wr9f,Uid:2553f758-7be0-41ff-b132-173573b81f03,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6b30c72a90c9caf784d5995c8ba39acdd9f31f01d62feba000a83b87ab80f3c\""
Jan 29 12:00:41.582091 containerd[2099]: time="2025-01-29T12:00:41.581950219Z" level=info msg="CreateContainer within sandbox \"e6b30c72a90c9caf784d5995c8ba39acdd9f31f01d62feba000a83b87ab80f3c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 12:00:41.596862 containerd[2099]: time="2025-01-29T12:00:41.596715939Z" level=info msg="CreateContainer within sandbox \"e6b30c72a90c9caf784d5995c8ba39acdd9f31f01d62feba000a83b87ab80f3c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ae6f506ecba4e003d37228244f7d3f1b5ac462b39b3f85dc91ee19e44062dcd2\""
Jan 29 12:00:41.599054 containerd[2099]: time="2025-01-29T12:00:41.597976081Z" level=info msg="StartContainer for \"ae6f506ecba4e003d37228244f7d3f1b5ac462b39b3f85dc91ee19e44062dcd2\""
Jan 29 12:00:41.668732 containerd[2099]: time="2025-01-29T12:00:41.668589334Z" level=info msg="StartContainer for \"ae6f506ecba4e003d37228244f7d3f1b5ac462b39b3f85dc91ee19e44062dcd2\" returns successfully"
Jan 29 12:00:41.693661 sshd[5271]: Accepted publickey for core from 139.178.68.195 port 51032 ssh2: RSA SHA256:S/Ljdvuj5tG5WfwgQVlG9VyLk42AZOHecSxk7w6NUXs
Jan 29 12:00:41.695675 sshd[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:00:41.709337 systemd-logind[2069]: New session 27 of user core.
Jan 29 12:00:41.713091 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 29 12:00:41.760499 containerd[2099]: time="2025-01-29T12:00:41.760430042Z" level=info msg="shim disconnected" id=ae6f506ecba4e003d37228244f7d3f1b5ac462b39b3f85dc91ee19e44062dcd2 namespace=k8s.io
Jan 29 12:00:41.760499 containerd[2099]: time="2025-01-29T12:00:41.760482735Z" level=warning msg="cleaning up after shim disconnected" id=ae6f506ecba4e003d37228244f7d3f1b5ac462b39b3f85dc91ee19e44062dcd2 namespace=k8s.io
Jan 29 12:00:41.760499 containerd[2099]: time="2025-01-29T12:00:41.760495224Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:00:41.787443 containerd[2099]: time="2025-01-29T12:00:41.787405469Z" level=info msg="CreateContainer within sandbox \"e6b30c72a90c9caf784d5995c8ba39acdd9f31f01d62feba000a83b87ab80f3c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 12:00:41.802386 containerd[2099]: time="2025-01-29T12:00:41.802338991Z" level=info msg="CreateContainer within sandbox \"e6b30c72a90c9caf784d5995c8ba39acdd9f31f01d62feba000a83b87ab80f3c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6827aa0e756173d4b4c7faf331d063cb057631149188fdcc40c1dcbb116e3ecb\""
Jan 29 12:00:41.804581 containerd[2099]: time="2025-01-29T12:00:41.804527290Z" level=info msg="StartContainer for \"6827aa0e756173d4b4c7faf331d063cb057631149188fdcc40c1dcbb116e3ecb\""
Jan 29 12:00:41.915063 containerd[2099]: time="2025-01-29T12:00:41.914955231Z" level=info msg="StartContainer for \"6827aa0e756173d4b4c7faf331d063cb057631149188fdcc40c1dcbb116e3ecb\" returns successfully"
Jan 29 12:00:41.963370 containerd[2099]: time="2025-01-29T12:00:41.963001691Z" level=info msg="shim disconnected" id=6827aa0e756173d4b4c7faf331d063cb057631149188fdcc40c1dcbb116e3ecb namespace=k8s.io
Jan 29 12:00:41.963370 containerd[2099]: time="2025-01-29T12:00:41.963073378Z" level=warning msg="cleaning up after shim disconnected" id=6827aa0e756173d4b4c7faf331d063cb057631149188fdcc40c1dcbb116e3ecb namespace=k8s.io
Jan 29 12:00:41.963370 containerd[2099]: time="2025-01-29T12:00:41.963086425Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:00:42.225997 kubelet[3378]: E0129 12:00:42.225265 3378 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-2p4ch" podUID="ccdc9d4e-3dec-4c2c-8805-db5fbcb28da8"
Jan 29 12:00:42.795336 containerd[2099]: time="2025-01-29T12:00:42.792569303Z" level=info msg="CreateContainer within sandbox \"e6b30c72a90c9caf784d5995c8ba39acdd9f31f01d62feba000a83b87ab80f3c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 12:00:42.877203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount265504647.mount: Deactivated successfully.
Jan 29 12:00:42.892963 containerd[2099]: time="2025-01-29T12:00:42.891178217Z" level=info msg="CreateContainer within sandbox \"e6b30c72a90c9caf784d5995c8ba39acdd9f31f01d62feba000a83b87ab80f3c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8e8e9d935c42a42da18d3b6fb2dd971b34c54284b2e970de900323da4302a34d\""
Jan 29 12:00:42.894025 containerd[2099]: time="2025-01-29T12:00:42.893987636Z" level=info msg="StartContainer for \"8e8e9d935c42a42da18d3b6fb2dd971b34c54284b2e970de900323da4302a34d\""
Jan 29 12:00:43.040681 containerd[2099]: time="2025-01-29T12:00:43.040637900Z" level=info msg="StartContainer for \"8e8e9d935c42a42da18d3b6fb2dd971b34c54284b2e970de900323da4302a34d\" returns successfully"
Jan 29 12:00:43.086162 containerd[2099]: time="2025-01-29T12:00:43.085959104Z" level=info msg="shim disconnected" id=8e8e9d935c42a42da18d3b6fb2dd971b34c54284b2e970de900323da4302a34d namespace=k8s.io
Jan 29 12:00:43.086162 containerd[2099]: time="2025-01-29T12:00:43.086027052Z" level=warning msg="cleaning up after shim disconnected" id=8e8e9d935c42a42da18d3b6fb2dd971b34c54284b2e970de900323da4302a34d namespace=k8s.io
Jan 29 12:00:43.086162 containerd[2099]: time="2025-01-29T12:00:43.086040413Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:00:43.418253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e8e9d935c42a42da18d3b6fb2dd971b34c54284b2e970de900323da4302a34d-rootfs.mount: Deactivated successfully.
Jan 29 12:00:43.422589 kubelet[3378]: E0129 12:00:43.422552 3378 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 12:00:43.798346 containerd[2099]: time="2025-01-29T12:00:43.798157642Z" level=info msg="CreateContainer within sandbox \"e6b30c72a90c9caf784d5995c8ba39acdd9f31f01d62feba000a83b87ab80f3c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 12:00:43.840233 containerd[2099]: time="2025-01-29T12:00:43.838286373Z" level=info msg="CreateContainer within sandbox \"e6b30c72a90c9caf784d5995c8ba39acdd9f31f01d62feba000a83b87ab80f3c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7219fb88549143a2195c8cb2a011e088663695b3dd8d0e4b5e9b1c738147ede9\""
Jan 29 12:00:43.841933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1717515147.mount: Deactivated successfully.
Jan 29 12:00:43.843767 containerd[2099]: time="2025-01-29T12:00:43.842904245Z" level=info msg="StartContainer for \"7219fb88549143a2195c8cb2a011e088663695b3dd8d0e4b5e9b1c738147ede9\"" Jan 29 12:00:43.936712 containerd[2099]: time="2025-01-29T12:00:43.936623398Z" level=info msg="StartContainer for \"7219fb88549143a2195c8cb2a011e088663695b3dd8d0e4b5e9b1c738147ede9\" returns successfully" Jan 29 12:00:43.959969 containerd[2099]: time="2025-01-29T12:00:43.959902658Z" level=info msg="shim disconnected" id=7219fb88549143a2195c8cb2a011e088663695b3dd8d0e4b5e9b1c738147ede9 namespace=k8s.io Jan 29 12:00:43.959969 containerd[2099]: time="2025-01-29T12:00:43.959963918Z" level=warning msg="cleaning up after shim disconnected" id=7219fb88549143a2195c8cb2a011e088663695b3dd8d0e4b5e9b1c738147ede9 namespace=k8s.io Jan 29 12:00:43.959969 containerd[2099]: time="2025-01-29T12:00:43.959976375Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:00:44.226998 kubelet[3378]: E0129 12:00:44.225371 3378 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-2p4ch" podUID="ccdc9d4e-3dec-4c2c-8805-db5fbcb28da8" Jan 29 12:00:44.419799 systemd[1]: run-containerd-runc-k8s.io-7219fb88549143a2195c8cb2a011e088663695b3dd8d0e4b5e9b1c738147ede9-runc.nx7XfB.mount: Deactivated successfully. Jan 29 12:00:44.420103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7219fb88549143a2195c8cb2a011e088663695b3dd8d0e4b5e9b1c738147ede9-rootfs.mount: Deactivated successfully. Jan 29 12:00:44.801407 containerd[2099]: time="2025-01-29T12:00:44.801327541Z" level=info msg="CreateContainer within sandbox \"e6b30c72a90c9caf784d5995c8ba39acdd9f31f01d62feba000a83b87ab80f3c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 12:00:44.875594 containerd[2099]: time="2025-01-29T12:00:44.873611281Z" level=info msg="CreateContainer within sandbox \"e6b30c72a90c9caf784d5995c8ba39acdd9f31f01d62feba000a83b87ab80f3c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b2de2a4dfa22908bcf9085740696b4ba1c1b0d41da37739ec8348273b12bdfe9\"" Jan 29 12:00:44.877416 containerd[2099]: time="2025-01-29T12:00:44.877381964Z" level=info msg="StartContainer for \"b2de2a4dfa22908bcf9085740696b4ba1c1b0d41da37739ec8348273b12bdfe9\"" Jan 29 12:00:44.959411 containerd[2099]: time="2025-01-29T12:00:44.959361581Z" level=info msg="StartContainer for \"b2de2a4dfa22908bcf9085740696b4ba1c1b0d41da37739ec8348273b12bdfe9\" returns successfully" Jan 29 12:00:45.761552 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 29 12:00:46.228153 kubelet[3378]: E0129 12:00:46.227155 3378 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-2p4ch" podUID="ccdc9d4e-3dec-4c2c-8805-db5fbcb28da8" Jan 29 12:00:48.226913 kubelet[3378]: E0129 12:00:48.226497 3378 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-2p4ch" podUID="ccdc9d4e-3dec-4c2c-8805-db5fbcb28da8" Jan 29 
Jan 29 12:00:48.803625 systemd-networkd[1652]: lxc_health: Link UP
Jan 29 12:00:48.815262 systemd-networkd[1652]: lxc_health: Gained carrier
Jan 29 12:00:48.818546 (udev-worker)[6139]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 12:00:49.494938 kubelet[3378]: I0129 12:00:49.494869 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8wr9f" podStartSLOduration=9.494846723 podStartE2EDuration="9.494846723s" podCreationTimestamp="2025-01-29 12:00:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:00:45.838551033 +0000 UTC m=+107.874553066" watchObservedRunningTime="2025-01-29 12:00:49.494846723 +0000 UTC m=+111.530848756"
Jan 29 12:00:50.197461 systemd-networkd[1652]: lxc_health: Gained IPv6LL
Jan 29 12:00:52.630175 ntpd[2047]: Listen normally on 13 lxc_health [fe80::a06c:ebff:fede:932c%14]:123
Jan 29 12:00:52.630687 ntpd[2047]: 29 Jan 12:00:52 ntpd[2047]: Listen normally on 13 lxc_health [fe80::a06c:ebff:fede:932c%14]:123
Jan 29 12:00:55.560251 systemd[1]: run-containerd-runc-k8s.io-b2de2a4dfa22908bcf9085740696b4ba1c1b0d41da37739ec8348273b12bdfe9-runc.aygyUP.mount: Deactivated successfully.
Jan 29 12:00:55.612706 kubelet[3378]: E0129 12:00:55.612642 3378 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:58832->127.0.0.1:46055: read tcp 127.0.0.1:58832->127.0.0.1:46055: read: connection reset by peer
Jan 29 12:00:55.614474 kubelet[3378]: E0129 12:00:55.614445 3378 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58832->127.0.0.1:46055: write tcp 127.0.0.1:58832->127.0.0.1:46055: write: broken pipe
Jan 29 12:00:55.637145 sshd[5271]: pam_unix(sshd:session): session closed for user core
Jan 29 12:00:55.641957 systemd-logind[2069]: Session 27 logged out. Waiting for processes to exit.
Jan 29 12:00:55.644095 systemd[1]: sshd@26-172.31.24.218:22-139.178.68.195:51032.service: Deactivated successfully.
Jan 29 12:00:55.650426 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 12:00:55.651152 systemd-logind[2069]: Removed session 27.
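
lxc_health is the veth interface Cilium creates for its own endpoint health checks, so systemd-networkd gaining carrier and an IPv6 link-local address here is the first observable sign that the datapath is up; ntpd then simply opens a listener on the new link. If needed, the interface can be inspected directly with iproute2:

    ip link show lxc_health
    ip -6 addr show dev lxc_health
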
Jan 29 12:00:58.219903 containerd[2099]: time="2025-01-29T12:00:58.219854477Z" level=info msg="StopPodSandbox for \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\""
Jan 29 12:00:58.220397 containerd[2099]: time="2025-01-29T12:00:58.219971680Z" level=info msg="TearDown network for sandbox \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\" successfully"
Jan 29 12:00:58.220397 containerd[2099]: time="2025-01-29T12:00:58.219987536Z" level=info msg="StopPodSandbox for \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\" returns successfully"
Jan 29 12:00:58.220877 containerd[2099]: time="2025-01-29T12:00:58.220844492Z" level=info msg="RemovePodSandbox for \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\""
Jan 29 12:00:58.225879 containerd[2099]: time="2025-01-29T12:00:58.225813915Z" level=info msg="Forcibly stopping sandbox \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\""
Jan 29 12:00:58.227459 containerd[2099]: time="2025-01-29T12:00:58.225910316Z" level=info msg="TearDown network for sandbox \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\" successfully"
Jan 29 12:00:58.240125 containerd[2099]: time="2025-01-29T12:00:58.240087827Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:00:58.240454 containerd[2099]: time="2025-01-29T12:00:58.240423845Z" level=info msg="RemovePodSandbox \"1afaa95969e1279a966f28d82f32623e600d23ad2b85640e9c5627bf3c7fd5ed\" returns successfully"
Jan 29 12:00:58.240928 containerd[2099]: time="2025-01-29T12:00:58.240901780Z" level=info msg="StopPodSandbox for \"339f17df78eff15909c00d4eeae5d9d5ff19a0bbfd3490c04fa4957b605dbdea\""
Jan 29 12:00:58.241017 containerd[2099]: time="2025-01-29T12:00:58.240984651Z" level=info msg="TearDown network for sandbox \"339f17df78eff15909c00d4eeae5d9d5ff19a0bbfd3490c04fa4957b605dbdea\" successfully"
Jan 29 12:00:58.241017 containerd[2099]: time="2025-01-29T12:00:58.240999856Z" level=info msg="StopPodSandbox for \"339f17df78eff15909c00d4eeae5d9d5ff19a0bbfd3490c04fa4957b605dbdea\" returns successfully"
Jan 29 12:00:58.241861 containerd[2099]: time="2025-01-29T12:00:58.241285377Z" level=info msg="RemovePodSandbox for \"339f17df78eff15909c00d4eeae5d9d5ff19a0bbfd3490c04fa4957b605dbdea\""
Jan 29 12:00:58.241861 containerd[2099]: time="2025-01-29T12:00:58.241335375Z" level=info msg="Forcibly stopping sandbox \"339f17df78eff15909c00d4eeae5d9d5ff19a0bbfd3490c04fa4957b605dbdea\""
Jan 29 12:00:58.241861 containerd[2099]: time="2025-01-29T12:00:58.241401134Z" level=info msg="TearDown network for sandbox \"339f17df78eff15909c00d4eeae5d9d5ff19a0bbfd3490c04fa4957b605dbdea\" successfully"
Jan 29 12:00:58.246447 containerd[2099]: time="2025-01-29T12:00:58.246406890Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"339f17df78eff15909c00d4eeae5d9d5ff19a0bbfd3490c04fa4957b605dbdea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
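
This StopPodSandbox/RemovePodSandbox run is the kubelet garbage-collecting old, already-exited sandboxes; the "not found" warnings are benign, since the sandbox metadata is gone by the time containerd tries to attach a status to the event. The equivalent manual cleanup, as a sketch assuming crictl targets this containerd instance (ids may be abbreviated to a unique prefix):

    crictl pods                  # list sandboxes and their states
    crictl stopp 1afaa95969e1    # stop a sandbox by id prefix
    crictl rmp 1afaa95969e1      # remove it
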
Jan 29 12:00:58.246542 containerd[2099]: time="2025-01-29T12:00:58.246468518Z" level=info msg="RemovePodSandbox \"339f17df78eff15909c00d4eeae5d9d5ff19a0bbfd3490c04fa4957b605dbdea\" returns successfully"
Jan 29 12:01:10.546672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72e553649f8eea9e33a60a15c8c5dcf487d68a8a3a12fd9699c2687051ef02b9-rootfs.mount: Deactivated successfully.
Jan 29 12:01:10.569888 containerd[2099]: time="2025-01-29T12:01:10.569818067Z" level=info msg="shim disconnected" id=72e553649f8eea9e33a60a15c8c5dcf487d68a8a3a12fd9699c2687051ef02b9 namespace=k8s.io
Jan 29 12:01:10.569888 containerd[2099]: time="2025-01-29T12:01:10.569882101Z" level=warning msg="cleaning up after shim disconnected" id=72e553649f8eea9e33a60a15c8c5dcf487d68a8a3a12fd9699c2687051ef02b9 namespace=k8s.io
Jan 29 12:01:10.569888 containerd[2099]: time="2025-01-29T12:01:10.569895850Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:01:10.877623 kubelet[3378]: I0129 12:01:10.877018 3378 scope.go:117] "RemoveContainer" containerID="72e553649f8eea9e33a60a15c8c5dcf487d68a8a3a12fd9699c2687051ef02b9"
Jan 29 12:01:10.883304 containerd[2099]: time="2025-01-29T12:01:10.883266972Z" level=info msg="CreateContainer within sandbox \"3f51fe7024701af882d989d533cd3a05556fe7065637aba5b07d15a22a70825a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 29 12:01:10.935188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2859715521.mount: Deactivated successfully.
Jan 29 12:01:10.943089 kubelet[3378]: E0129 12:01:10.943024 3378 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-218?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 12:01:10.944106 containerd[2099]: time="2025-01-29T12:01:10.944060908Z" level=info msg="CreateContainer within sandbox \"3f51fe7024701af882d989d533cd3a05556fe7065637aba5b07d15a22a70825a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"06d87be9e18d2945a2856e22b8d7eafa87f4f0bdfd86b97317359aeaa931b3a4\""
Jan 29 12:01:10.945615 containerd[2099]: time="2025-01-29T12:01:10.945583235Z" level=info msg="StartContainer for \"06d87be9e18d2945a2856e22b8d7eafa87f4f0bdfd86b97317359aeaa931b3a4\""
Jan 29 12:01:11.048539 containerd[2099]: time="2025-01-29T12:01:11.048488941Z" level=info msg="StartContainer for \"06d87be9e18d2945a2856e22b8d7eafa87f4f0bdfd86b97317359aeaa931b3a4\" returns successfully"
Jan 29 12:01:16.688784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bac1147184f4270e96bb18f005779bfea829a27881c9277470b0cfb255f3c06-rootfs.mount: Deactivated successfully.
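
Here the kube-controller-manager container exits ("shim disconnected" for 72e55364...) and the kubelet immediately recreates it as Attempt:1; the lease-update timeout against https://172.31.24.218:6443 suggests the API server on this node was briefly slow or unreachable during the restart. Afterwards, the node lease can be checked directly (a sketch, assuming kubectl access; node leases are named after the node and live in the kube-node-lease namespace, as the failing URL above shows):

    kubectl -n kube-node-lease get lease ip-172-31-24-218 -o yaml
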
Jan 29 12:01:16.708295 containerd[2099]: time="2025-01-29T12:01:16.708225907Z" level=info msg="shim disconnected" id=6bac1147184f4270e96bb18f005779bfea829a27881c9277470b0cfb255f3c06 namespace=k8s.io
Jan 29 12:01:16.708295 containerd[2099]: time="2025-01-29T12:01:16.708288815Z" level=warning msg="cleaning up after shim disconnected" id=6bac1147184f4270e96bb18f005779bfea829a27881c9277470b0cfb255f3c06 namespace=k8s.io
Jan 29 12:01:16.708295 containerd[2099]: time="2025-01-29T12:01:16.708300367Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:01:16.905508 kubelet[3378]: I0129 12:01:16.905404 3378 scope.go:117] "RemoveContainer" containerID="6bac1147184f4270e96bb18f005779bfea829a27881c9277470b0cfb255f3c06"
Jan 29 12:01:16.911888 containerd[2099]: time="2025-01-29T12:01:16.911782919Z" level=info msg="CreateContainer within sandbox \"ab6c616e886ca451700fa7d7ec503eb794f4090cb790e8a1981b589272a8a89c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 29 12:01:16.943441 containerd[2099]: time="2025-01-29T12:01:16.943286564Z" level=info msg="CreateContainer within sandbox \"ab6c616e886ca451700fa7d7ec503eb794f4090cb790e8a1981b589272a8a89c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"47d0127fb82066d6c6227aff9d4f3d888c74d4fc0cf70c562c3db25c9d99da9c\""
Jan 29 12:01:16.944114 containerd[2099]: time="2025-01-29T12:01:16.944076427Z" level=info msg="StartContainer for \"47d0127fb82066d6c6227aff9d4f3d888c74d4fc0cf70c562c3db25c9d99da9c\""
Jan 29 12:01:17.022673 containerd[2099]: time="2025-01-29T12:01:17.022626154Z" level=info msg="StartContainer for \"47d0127fb82066d6c6227aff9d4f3d888c74d4fc0cf70c562c3db25c9d99da9c\" returns successfully"
Jan 29 12:01:20.944655 kubelet[3378]: E0129 12:01:20.944263 3378 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-218?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
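
kube-scheduler follows the same pattern: container 6bac1147... dies and is restarted as Attempt:1, and the node lease update times out again while the control plane recovers. To find out why the first attempt exited, the previous container's logs can often still be read (a sketch; the kube-scheduler-ip-172-31-24-218 pod name assumes the usual <component>-<node> static-pod naming convention, which this log does not confirm):

    crictl logs 6bac1147184f                                     # by CRI container id from this log
    kubectl -n kube-system logs -p kube-scheduler-ip-172-31-24-218
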