Feb 13 19:44:41.262661 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:44:05 -00 2025
Feb 13 19:44:41.262709 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:44:41.262728 kernel: BIOS-provided physical RAM map:
Feb 13 19:44:41.262742 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 19:44:41.262753 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 19:44:41.262765 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 19:44:41.262783 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 13 19:44:41.262864 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 13 19:44:41.262877 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 13 19:44:41.262889 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 19:44:41.262900 kernel: NX (Execute Disable) protection: active
Feb 13 19:44:41.262912 kernel: APIC: Static calls initialized
Feb 13 19:44:41.262925 kernel: SMBIOS 2.7 present.
Feb 13 19:44:41.262939 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 13 19:44:41.262979 kernel: Hypervisor detected: KVM
Feb 13 19:44:41.262992 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:44:41.263005 kernel: kvm-clock: using sched offset of 9462668401 cycles
Feb 13 19:44:41.263019 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:44:41.263032 kernel: tsc: Detected 2499.996 MHz processor
Feb 13 19:44:41.263045 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:44:41.263059 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:44:41.263075 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 13 19:44:41.263088 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 19:44:41.263101 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:44:41.263114 kernel: Using GB pages for direct mapping
Feb 13 19:44:41.263127 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:44:41.263140 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 13 19:44:41.263153 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 13 19:44:41.263168 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:44:41.263181 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 13 19:44:41.263197 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 13 19:44:41.263210 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 19:44:41.263224 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:44:41.263237 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 13 19:44:41.263251 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:44:41.263264 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 13 19:44:41.263277 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 13 19:44:41.263291 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 19:44:41.263304 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 13 19:44:41.263321 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 13 19:44:41.263340 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 13 19:44:41.263354 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 13 19:44:41.263485 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 13 19:44:41.263502 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 13 19:44:41.263520 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 13 19:44:41.263535 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 13 19:44:41.263549 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 13 19:44:41.263562 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 13 19:44:41.263577 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 19:44:41.263590 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 19:44:41.263604 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 13 19:44:41.263618 kernel: NUMA: Initialized distance table, cnt=1
Feb 13 19:44:41.263631 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 13 19:44:41.264270 kernel: Zone ranges:
Feb 13 19:44:41.264284 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:44:41.264296 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 13 19:44:41.264309 kernel: Normal empty
Feb 13 19:44:41.264322 kernel: Movable zone start for each node
Feb 13 19:44:41.264334 kernel: Early memory node ranges
Feb 13 19:44:41.264346 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 19:44:41.264358 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 13 19:44:41.264371 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 13 19:44:41.271046 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:44:41.271092 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 19:44:41.271105 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 13 19:44:41.271118 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 19:44:41.271130 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:44:41.271143 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 13 19:44:41.271154 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:44:41.271167 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:44:41.271179 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:44:41.271192 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:44:41.271212 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:44:41.271224 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:44:41.271237 kernel: TSC deadline timer available
Feb 13 19:44:41.271249 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 19:44:41.271261 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:44:41.271275 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 13 19:44:41.271286 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:44:41.271299 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:44:41.271312 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 19:44:41.271332 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 19:44:41.271345 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 19:44:41.271358 kernel: pcpu-alloc: [0] 0 1
Feb 13 19:44:41.271371 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:44:41.271383 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:44:41.272581 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:44:41.272607 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:44:41.272619 kernel: random: crng init done
Feb 13 19:44:41.272641 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:44:41.272655 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 19:44:41.272666 kernel: Fallback order for Node 0: 0
Feb 13 19:44:41.272678 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Feb 13 19:44:41.272690 kernel: Policy zone: DMA32
Feb 13 19:44:41.272703 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:44:41.272716 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42976K init, 2216K bss, 125152K reserved, 0K cma-reserved)
Feb 13 19:44:41.272729 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:44:41.272747 kernel: Kernel/User page tables isolation: enabled
Feb 13 19:44:41.272760 kernel: ftrace: allocating 37923 entries in 149 pages
Feb 13 19:44:41.272774 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:44:41.272786 kernel: Dynamic Preempt: voluntary
Feb 13 19:44:41.272799 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:44:41.272814 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:44:41.272827 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:44:41.272840 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:44:41.272852 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:44:41.272866 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:44:41.272882 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:44:41.272895 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:44:41.272908 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 19:44:41.272921 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:44:41.272934 kernel: Console: colour VGA+ 80x25
Feb 13 19:44:41.272964 kernel: printk: console [ttyS0] enabled
Feb 13 19:44:41.272977 kernel: ACPI: Core revision 20230628
Feb 13 19:44:41.272990 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 13 19:44:41.273003 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:44:41.273020 kernel: x2apic enabled
Feb 13 19:44:41.273033 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:44:41.273059 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Feb 13 19:44:41.273075 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Feb 13 19:44:41.273090 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 19:44:41.273103 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 19:44:41.273116 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:44:41.273129 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:44:41.273143 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:44:41.273157 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:44:41.273170 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 19:44:41.273184 kernel: RETBleed: Vulnerable
Feb 13 19:44:41.273201 kernel: Speculative Store Bypass: Vulnerable
Feb 13 19:44:41.273214 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 19:44:41.273228 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 19:44:41.273241 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 19:44:41.273255 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:44:41.273268 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:44:41.273284 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:44:41.273298 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 19:44:41.273312 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 19:44:41.273325 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 19:44:41.273339 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 19:44:41.273352 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 19:44:41.273366 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 13 19:44:41.273380 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:44:41.273394 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 19:44:41.273407 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 19:44:41.273421 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Feb 13 19:44:41.273436 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Feb 13 19:44:41.273450 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 13 19:44:41.275330 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Feb 13 19:44:41.275351 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 13 19:44:41.275366 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:44:41.275379 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:44:41.275725 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:44:41.275758 kernel: landlock: Up and running.
Feb 13 19:44:41.275774 kernel: SELinux: Initializing.
Feb 13 19:44:41.275791 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 19:44:41.275807 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 19:44:41.275824 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 19:44:41.275849 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:44:41.275864 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:44:41.275881 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:44:41.275897 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 19:44:41.275914 kernel: signal: max sigframe size: 3632
Feb 13 19:44:41.275930 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:44:41.275961 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:44:41.275977 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 19:44:41.275993 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:44:41.276013 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:44:41.276029 kernel: .... node #0, CPUs: #1
Feb 13 19:44:41.276047 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 19:44:41.276064 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 19:44:41.276082 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:44:41.276098 kernel: smpboot: Max logical packages: 1
Feb 13 19:44:41.276114 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Feb 13 19:44:41.276131 kernel: devtmpfs: initialized
Feb 13 19:44:41.276150 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:44:41.276167 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:44:41.276183 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:44:41.276199 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:44:41.276216 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:44:41.276232 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:44:41.276249 kernel: audit: type=2000 audit(1739475880.109:1): state=initialized audit_enabled=0 res=1
Feb 13 19:44:41.276265 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:44:41.276282 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:44:41.276302 kernel: cpuidle: using governor menu
Feb 13 19:44:41.276318 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:44:41.276335 kernel: dca service started, version 1.12.1
Feb 13 19:44:41.276351 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:44:41.276367 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:44:41.276384 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:44:41.276401 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:44:41.276416 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:44:41.276433 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:44:41.276453 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:44:41.276469 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:44:41.276496 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:44:41.276513 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:44:41.276529 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 19:44:41.276546 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:44:41.276562 kernel: ACPI: Interpreter enabled
Feb 13 19:44:41.276577 kernel: ACPI: PM: (supports S0 S5)
Feb 13 19:44:41.278390 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:44:41.278408 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:44:41.278428 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:44:41.278443 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 19:44:41.278456 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:44:41.278702 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:44:41.278856 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 19:44:41.279015 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 19:44:41.279035 kernel: acpiphp: Slot [3] registered
Feb 13 19:44:41.279056 kernel: acpiphp: Slot [4] registered
Feb 13 19:44:41.279069 kernel: acpiphp: Slot [5] registered
Feb 13 19:44:41.279084 kernel: acpiphp: Slot [6] registered
Feb 13 19:44:41.279099 kernel: acpiphp: Slot [7] registered
Feb 13 19:44:41.279113 kernel: acpiphp: Slot [8] registered
Feb 13 19:44:41.279128 kernel: acpiphp: Slot [9] registered
Feb 13 19:44:41.279143 kernel: acpiphp: Slot [10] registered
Feb 13 19:44:41.279159 kernel: acpiphp: Slot [11] registered
Feb 13 19:44:41.279176 kernel: acpiphp: Slot [12] registered
Feb 13 19:44:41.279197 kernel: acpiphp: Slot [13] registered
Feb 13 19:44:41.279214 kernel: acpiphp: Slot [14] registered
Feb 13 19:44:41.279230 kernel: acpiphp: Slot [15] registered
Feb 13 19:44:41.279246 kernel: acpiphp: Slot [16] registered
Feb 13 19:44:41.279263 kernel: acpiphp: Slot [17] registered
Feb 13 19:44:41.279280 kernel: acpiphp: Slot [18] registered
Feb 13 19:44:41.279296 kernel: acpiphp: Slot [19] registered
Feb 13 19:44:41.279312 kernel: acpiphp: Slot [20] registered
Feb 13 19:44:41.279327 kernel: acpiphp: Slot [21] registered
Feb 13 19:44:41.279344 kernel: acpiphp: Slot [22] registered
Feb 13 19:44:41.279364 kernel: acpiphp: Slot [23] registered
Feb 13 19:44:41.279382 kernel: acpiphp: Slot [24] registered
Feb 13 19:44:41.281531 kernel: acpiphp: Slot [25] registered
Feb 13 19:44:41.281557 kernel: acpiphp: Slot [26] registered
Feb 13 19:44:41.281574 kernel: acpiphp: Slot [27] registered
Feb 13 19:44:41.281589 kernel: acpiphp: Slot [28] registered
Feb 13 19:44:41.281605 kernel: acpiphp: Slot [29] registered
Feb 13 19:44:41.281621 kernel: acpiphp: Slot [30] registered
Feb 13 19:44:41.281638 kernel: acpiphp: Slot [31] registered
Feb 13 19:44:41.281661 kernel: PCI host bridge to bus 0000:00
Feb 13 19:44:41.281899 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:44:41.282042 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:44:41.282744 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:44:41.282887 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 19:44:41.283036 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:44:41.284997 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 19:44:41.285183 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 19:44:41.285340 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 13 19:44:41.285481 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 19:44:41.285622 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 13 19:44:41.285761 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 13 19:44:41.285913 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 13 19:44:41.286101 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 13 19:44:41.286247 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 13 19:44:41.286383 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 13 19:44:41.286521 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 13 19:44:41.286660 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 13671 usecs
Feb 13 19:44:41.286811 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 13 19:44:41.286972 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 13 19:44:41.287124 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 19:44:41.287265 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:44:41.287529 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:44:41.287682 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 13 19:44:41.287836 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:44:41.287998 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 13 19:44:41.288021 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:44:41.288044 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:44:41.288062 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:44:41.288078 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:44:41.288095 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 19:44:41.288112 kernel: iommu: Default domain type: Translated
Feb 13 19:44:41.288129 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:44:41.288147 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:44:41.288164 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:44:41.288181 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 19:44:41.288201 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 13 19:44:41.288338 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 13 19:44:41.288482 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 13 19:44:41.289131 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:44:41.289157 kernel: vgaarb: loaded
Feb 13 19:44:41.289173 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 13 19:44:41.289188 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 13 19:44:41.289203 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:44:41.289220 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:44:41.289240 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:44:41.289253 kernel: pnp: PnP ACPI init
Feb 13 19:44:41.289269 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 19:44:41.289284 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:44:41.289297 kernel: NET: Registered PF_INET protocol family
Feb 13 19:44:41.289312 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:44:41.289326 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 19:44:41.289340 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:44:41.289359 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 19:44:41.289372 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 19:44:41.289387 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 19:44:41.289401 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 19:44:41.289415 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 19:44:41.289430 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:44:41.289444 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:44:41.289585 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:44:41.289841 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:44:41.290016 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:44:41.290225 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 19:44:41.290379 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 19:44:41.290400 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:44:41.290415 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 19:44:41.290428 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Feb 13 19:44:41.290441 kernel: clocksource: Switched to clocksource tsc
Feb 13 19:44:41.290454 kernel: Initialise system trusted keyrings
Feb 13 19:44:41.290473 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 19:44:41.290487 kernel: Key type asymmetric registered
Feb 13 19:44:41.290501 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:44:41.290515 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:44:41.290529 kernel: io scheduler mq-deadline registered
Feb 13 19:44:41.290544 kernel: io scheduler kyber registered
Feb 13 19:44:41.290558 kernel: io scheduler bfq registered
Feb 13 19:44:41.290572 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:44:41.290586 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:44:41.290604 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:44:41.290618 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:44:41.290633 kernel: i8042: Warning: Keylock active
Feb 13 19:44:41.290646 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:44:41.290661 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:44:41.290881 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 19:44:41.291081 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 19:44:41.291212 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T19:44:40 UTC (1739475880)
Feb 13 19:44:41.291450 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 19:44:41.291472 kernel: intel_pstate: CPU model not supported
Feb 13 19:44:41.291490 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:44:41.291507 kernel: Segment Routing with IPv6
Feb 13 19:44:41.291524 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:44:41.291541 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:44:41.291558 kernel: Key type dns_resolver registered
Feb 13 19:44:41.291576 kernel: IPI shorthand broadcast: enabled
Feb 13 19:44:41.291593 kernel: sched_clock: Marking stable (726077314, 285608928)->(1137349646, -125663404)
Feb 13 19:44:41.291613 kernel: registered taskstats version 1
Feb 13 19:44:41.291630 kernel: Loading compiled-in X.509 certificates
Feb 13 19:44:41.291644 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 0cc219a306b9e46e583adebba1820decbdc4307b'
Feb 13 19:44:41.291659 kernel: Key type .fscrypt registered
Feb 13 19:44:41.291676 kernel: Key type fscrypt-provisioning registered
Feb 13 19:44:41.291693 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:44:41.291710 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:44:41.291727 kernel: ima: No architecture policies found
Feb 13 19:44:41.291747 kernel: clk: Disabling unused clocks
Feb 13 19:44:41.291763 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 19:44:41.291780 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 19:44:41.291797 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 19:44:41.291814 kernel: Run /init as init process
Feb 13 19:44:41.291831 kernel: with arguments:
Feb 13 19:44:41.291848 kernel: /init
Feb 13 19:44:41.291864 kernel: with environment:
Feb 13 19:44:41.291880 kernel: HOME=/
Feb 13 19:44:41.291897 kernel: TERM=linux
Feb 13 19:44:41.291916 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:44:41.291997 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:44:41.292015 systemd[1]: Detected virtualization amazon.
Feb 13 19:44:41.292032 systemd[1]: Detected architecture x86-64.
Feb 13 19:44:41.292045 systemd[1]: Running in initrd.
Feb 13 19:44:41.292059 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:44:41.292074 systemd[1]: Hostname set to .
Feb 13 19:44:41.293253 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:44:41.293276 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:44:41.293294 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:44:41.293313 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:44:41.293331 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:44:41.293349 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:44:41.293365 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:44:41.293385 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:44:41.293400 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:44:41.293418 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:44:41.293434 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:44:41.293451 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:44:41.293468 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:44:41.293486 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:44:41.293520 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:44:41.293535 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:44:41.293550 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:44:41.293564 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:44:41.293579 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:44:41.293593 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:44:41.293608 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:44:41.293621 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:44:41.293636 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:44:41.293655 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:44:41.293674 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:44:41.293689 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:44:41.293704 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:44:41.293719 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:44:41.293739 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:44:41.293754 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:44:41.293769 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:44:41.293787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:44:41.293804 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:44:41.293855 systemd-journald[179]: Collecting audit messages is disabled.
Feb 13 19:44:41.293897 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:44:41.293915 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:44:41.293934 systemd-journald[179]: Journal started
Feb 13 19:44:41.293983 systemd-journald[179]: Runtime Journal (/run/log/journal/ec26b2407a5914cae6384a56db8d6bab) is 4.8M, max 38.6M, 33.7M free.
Feb 13 19:44:41.307062 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:44:41.312034 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:44:41.310606 systemd-modules-load[180]: Inserted module 'overlay'
Feb 13 19:44:41.337256 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:44:41.973522 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:44:41.973564 kernel: Bridge firewalling registered
Feb 13 19:44:41.406887 systemd-modules-load[180]: Inserted module 'br_netfilter'
Feb 13 19:44:42.000526 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:44:42.005436 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:44:42.010655 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:44:42.024246 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:44:42.027135 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:44:42.049020 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:44:42.055351 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:44:42.083284 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:44:42.089352 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:44:42.105204 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:44:42.108934 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:44:42.126559 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:44:42.176192 dracut-cmdline[215]: dracut-dracut-053
Feb 13 19:44:42.180835 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:44:42.197495 systemd-resolved[214]: Positive Trust Anchors:
Feb 13 19:44:42.197521 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:44:42.197568 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:44:42.226486 systemd-resolved[214]: Defaulting to hostname 'linux'.
Feb 13 19:44:42.232454 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:44:42.241899 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:44:42.359992 kernel: SCSI subsystem initialized
Feb 13 19:44:42.387081 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:44:42.401520 kernel: iscsi: registered transport (tcp)
Feb 13 19:44:42.429059 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:44:42.429146 kernel: QLogic iSCSI HBA Driver
Feb 13 19:44:42.495190 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:44:42.506209 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:44:42.539136 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:44:42.539225 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:44:42.539246 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:44:42.583984 kernel: raid6: avx512x4 gen() 6341 MB/s
Feb 13 19:44:42.600989 kernel: raid6: avx512x2 gen() 14923 MB/s
Feb 13 19:44:42.617986 kernel: raid6: avx512x1 gen() 16423 MB/s
Feb 13 19:44:42.637462 kernel: raid6: avx2x4 gen() 9861 MB/s
Feb 13 19:44:42.653989 kernel: raid6: avx2x2 gen() 7105 MB/s
Feb 13 19:44:42.671032 kernel: raid6: avx2x1 gen() 8895 MB/s
Feb 13 19:44:42.671233 kernel: raid6: using algorithm avx512x1 gen() 16423 MB/s
Feb 13 19:44:42.689081 kernel: raid6: .... xor() 14994 MB/s, rmw enabled
Feb 13 19:44:42.689155 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 19:44:42.721980 kernel: xor: automatically using best checksumming function avx
Feb 13 19:44:42.992143 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:44:43.034899 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:44:43.059223 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:44:43.099129 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Feb 13 19:44:43.108930 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:44:43.122096 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:44:43.157857 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Feb 13 19:44:43.203752 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:44:43.212594 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:44:43.325440 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:44:43.339592 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:44:43.417656 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:44:43.428000 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:44:43.431842 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:44:43.433786 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:44:43.443363 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:44:43.483814 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:44:43.512753 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:44:43.513120 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 13 19:44:43.513323 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:19:fc:24:12:3f
Feb 13 19:44:43.493143 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:44:43.541979 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:44:43.594912 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:44:43.595055 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:44:43.599895 (udev-worker)[450]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:44:43.612098 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:44:43.622809 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 19:44:43.645028 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:44:43.649392 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:44:43.650582 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:44:43.655239 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:44:43.666173 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:44:43.666364 kernel: GPT:9289727 != 16777215
Feb 13 19:44:43.666384 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:44:43.666396 kernel: GPT:9289727 != 16777215
Feb 13 19:44:43.666453 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:44:43.666467 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:44:43.656984 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:44:43.657161 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:44:43.658548 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:44:43.670745 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:44:43.882870 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (457)
Feb 13 19:44:43.885489 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:44:43.888562 kernel: BTRFS: device fsid e9c87d9f-3864-4b45-9be4-80a5397f1fc6 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (455)
Feb 13 19:44:43.896233 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:44:43.984022 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:44:44.005395 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:44:44.020971 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:44:44.048936 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:44:44.066091 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:44:44.067194 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:44:44.090159 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:44:44.130517 disk-uuid[633]: Primary Header is updated.
Feb 13 19:44:44.130517 disk-uuid[633]: Secondary Entries is updated.
Feb 13 19:44:44.130517 disk-uuid[633]: Secondary Header is updated.
Feb 13 19:44:44.143176 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:44:44.158018 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:44:45.161993 disk-uuid[634]: The operation has completed successfully.
Feb 13 19:44:45.186145 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:44:45.391581 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:44:45.391788 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:44:45.439372 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:44:45.447910 sh[893]: Success
Feb 13 19:44:45.471970 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 19:44:45.599478 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:44:45.614702 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:44:45.646306 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:44:45.679436 kernel: BTRFS info (device dm-0): first mount of filesystem e9c87d9f-3864-4b45-9be4-80a5397f1fc6
Feb 13 19:44:45.679521 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:44:45.679541 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:44:45.681075 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:44:45.681115 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:44:45.774986 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:44:45.797109 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:44:45.798120 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:44:45.814258 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:44:45.818201 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:44:45.849014 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:44:45.849109 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:44:45.849134 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:44:45.856978 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:44:45.872939 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:44:45.871340 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:44:45.883117 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:44:45.896156 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:44:45.948046 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:44:45.960226 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:44:46.029858 systemd-networkd[1085]: lo: Link UP
Feb 13 19:44:46.029873 systemd-networkd[1085]: lo: Gained carrier
Feb 13 19:44:46.033055 systemd-networkd[1085]: Enumeration completed
Feb 13 19:44:46.033624 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:44:46.033629 systemd-networkd[1085]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:44:46.034870 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:44:46.040148 systemd[1]: Reached target network.target - Network.
Feb 13 19:44:46.046497 systemd-networkd[1085]: eth0: Link UP
Feb 13 19:44:46.046507 systemd-networkd[1085]: eth0: Gained carrier
Feb 13 19:44:46.046524 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:44:46.060072 systemd-networkd[1085]: eth0: DHCPv4 address 172.31.19.60/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:44:46.272014 ignition[1030]: Ignition 2.20.0
Feb 13 19:44:46.272029 ignition[1030]: Stage: fetch-offline
Feb 13 19:44:46.272380 ignition[1030]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:46.272393 ignition[1030]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:44:46.274297 ignition[1030]: Ignition finished successfully
Feb 13 19:44:46.278580 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:44:46.290663 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:44:46.315160 ignition[1093]: Ignition 2.20.0
Feb 13 19:44:46.315175 ignition[1093]: Stage: fetch
Feb 13 19:44:46.315786 ignition[1093]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:46.315803 ignition[1093]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:44:46.315926 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:44:46.381470 ignition[1093]: PUT result: OK
Feb 13 19:44:46.387026 ignition[1093]: parsed url from cmdline: ""
Feb 13 19:44:46.387038 ignition[1093]: no config URL provided
Feb 13 19:44:46.387049 ignition[1093]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:44:46.387065 ignition[1093]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:44:46.387092 ignition[1093]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:44:46.389696 ignition[1093]: PUT result: OK
Feb 13 19:44:46.389774 ignition[1093]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:44:46.395913 ignition[1093]: GET result: OK
Feb 13 19:44:46.398939 ignition[1093]: parsing config with SHA512: 8b3a677237773dd22053b1306ebe578e34763eb50b5adedc78a91e5b1164ebb4a171a07a9fd233ebf83f2eaf20c5e1ec9c0d3485f3b88f67f04014e3f9417df1
Feb 13 19:44:46.408870 unknown[1093]: fetched base config from "system"
Feb 13 19:44:46.408987 unknown[1093]: fetched base config from "system"
Feb 13 19:44:46.409003 unknown[1093]: fetched user config from "aws"
Feb 13 19:44:46.415331 ignition[1093]: fetch: fetch complete
Feb 13 19:44:46.415507 ignition[1093]: fetch: fetch passed
Feb 13 19:44:46.415629 ignition[1093]: Ignition finished successfully
Feb 13 19:44:46.418643 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:44:46.431393 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:44:46.454448 ignition[1099]: Ignition 2.20.0
Feb 13 19:44:46.454464 ignition[1099]: Stage: kargs
Feb 13 19:44:46.455798 ignition[1099]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:46.455812 ignition[1099]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:44:46.455930 ignition[1099]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:44:46.458274 ignition[1099]: PUT result: OK
Feb 13 19:44:46.466564 ignition[1099]: kargs: kargs passed
Feb 13 19:44:46.466662 ignition[1099]: Ignition finished successfully
Feb 13 19:44:46.469038 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:44:46.485448 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:44:46.550568 ignition[1105]: Ignition 2.20.0
Feb 13 19:44:46.550583 ignition[1105]: Stage: disks
Feb 13 19:44:46.551185 ignition[1105]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:46.551200 ignition[1105]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:44:46.551318 ignition[1105]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:44:46.552747 ignition[1105]: PUT result: OK
Feb 13 19:44:46.566313 ignition[1105]: disks: disks passed
Feb 13 19:44:46.566420 ignition[1105]: Ignition finished successfully
Feb 13 19:44:46.568075 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:44:46.571095 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:44:46.572775 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:44:46.597209 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:44:46.614241 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:44:46.614375 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:44:46.635653 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:44:46.714237 systemd-fsck[1113]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:44:46.721326 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:44:46.733140 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:44:46.866094 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c5993b0e-9201-4b44-aa01-79dc9d6c9fc9 r/w with ordered data mode. Quota mode: none.
Feb 13 19:44:46.867156 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:44:46.869686 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:44:46.883094 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:44:46.887124 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:44:46.891750 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:44:46.901843 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:44:46.908521 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:44:46.940638 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:44:46.953612 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:44:46.974992 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1132)
Feb 13 19:44:46.979216 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:44:46.979288 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:44:46.979308 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:44:46.992983 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:44:46.997664 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:44:47.375927 initrd-setup-root[1156]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:44:47.409354 initrd-setup-root[1163]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:44:47.417719 initrd-setup-root[1170]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:44:47.426985 initrd-setup-root[1177]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:44:47.485170 systemd-networkd[1085]: eth0: Gained IPv6LL
Feb 13 19:44:47.823844 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:44:47.836119 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:44:47.846265 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:44:47.859477 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:44:47.860086 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:44:47.909006 ignition[1245]: INFO : Ignition 2.20.0
Feb 13 19:44:47.909006 ignition[1245]: INFO : Stage: mount
Feb 13 19:44:47.912650 ignition[1245]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:47.912650 ignition[1245]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:44:47.912650 ignition[1245]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:44:47.922093 ignition[1245]: INFO : PUT result: OK
Feb 13 19:44:47.926815 ignition[1245]: INFO : mount: mount passed
Feb 13 19:44:47.928312 ignition[1245]: INFO : Ignition finished successfully
Feb 13 19:44:47.930829 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:44:47.934376 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:44:47.948218 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:44:47.975321 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:44:48.004986 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1257)
Feb 13 19:44:48.007979 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:44:48.008053 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:44:48.009100 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:44:48.014993 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:44:48.018521 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:44:48.057657 ignition[1274]: INFO : Ignition 2.20.0
Feb 13 19:44:48.057657 ignition[1274]: INFO : Stage: files
Feb 13 19:44:48.060125 ignition[1274]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:48.060125 ignition[1274]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:44:48.060125 ignition[1274]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:44:48.067063 ignition[1274]: INFO : PUT result: OK
Feb 13 19:44:48.070153 ignition[1274]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:44:48.074507 ignition[1274]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:44:48.074507 ignition[1274]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:44:48.097500 ignition[1274]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:44:48.100107 ignition[1274]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:44:48.102938 unknown[1274]: wrote ssh authorized keys file for user: core
Feb 13 19:44:48.108885 ignition[1274]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:44:48.113109 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 19:44:48.115670 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 19:44:48.204034 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:44:48.865382 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 19:44:48.870308 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:44:48.870308 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 19:44:49.213190 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:44:49.413559 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:44:49.415881 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:44:49.415881 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:44:49.415881 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:44:49.439135 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:44:49.439135 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:44:49.439135 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:44:49.439135 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
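The ensureUsers ops above show the pattern Ignition uses for user setup: run usermod with --root pointed at the not-yet-booted root, then write the authorized keys file. A sketch of that pattern with subprocess follows; the --root flag is standard shadow-utils and the usermod invocation matches the log, but the .ssh path and permissions here are illustrative, not necessarily where Flatcar's Ignition places keys.

    # Sketch of Ignition's ensureUsers pattern: modify the user inside the
    # target root, then drop authorized_keys into its home directory.
    import subprocess
    from pathlib import Path

    def ensure_user(name, keys, sysroot="/sysroot"):
        # --root applies the change to <sysroot>/etc/* instead of the running
        # system's databases (same call the log shows: usermod --root /sysroot core).
        subprocess.run(["usermod", "--root", sysroot, name], check=True)
        ssh_dir = Path(sysroot, "home", name, ".ssh")
        ssh_dir.mkdir(parents=True, exist_ok=True)
        akf = ssh_dir / "authorized_keys"          # illustrative location
        akf.write_text("".join(k.rstrip() + "\n" for k in keys))
        akf.chmod(0o600)                           # illustrative permissions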
Feb 13 19:44:49.439135 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:44:49.439135 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:44:49.439135 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:44:49.439135 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:44:49.439135 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:44:49.439135 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:44:49.439135 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 19:44:49.719255 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:44:50.167626 ignition[1274]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:44:50.167626 ignition[1274]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 19:44:50.180039 ignition[1274]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:44:50.180039 ignition[1274]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:44:50.180039 ignition[1274]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 19:44:50.180039 ignition[1274]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:44:50.180039 ignition[1274]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:44:50.180039 ignition[1274]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:44:50.180039 ignition[1274]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:44:50.180039 ignition[1274]: INFO : files: files passed
Feb 13 19:44:50.180039 ignition[1274]: INFO : Ignition finished successfully
Feb 13 19:44:50.203262 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:44:50.215412 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:44:50.225098 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:44:50.240990 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:44:50.241153 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
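The ops above (file writes, a link into /etc/extensions, a unit written and preset to enabled) are exactly the shape an Ignition v3 config produces. A minimal sketch of such a config, built as a Python dict and serialized to JSON, is below; it is abridged and illustrative, not the config this machine actually received, and the unit contents are a placeholder.

    # Minimal sketch of an Ignition v3 config yielding ops like those logged above.
    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [{
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
            }],
        },
        "systemd": {
            "units": [{
                "name": "prepare-helm.service",
                "enabled": True,     # produces the op(e) "setting preset to enabled"
                "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n",  # placeholder
            }],
        },
    }
    print(json.dumps(config, indent=2))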
Feb 13 19:44:50.257560 initrd-setup-root-after-ignition[1303]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:44:50.257560 initrd-setup-root-after-ignition[1303]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:44:50.263283 initrd-setup-root-after-ignition[1307]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:44:50.268602 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:44:50.273385 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:44:50.284373 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:44:50.397140 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:44:50.397350 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:44:50.404432 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:44:50.405568 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:44:50.406465 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:44:50.415402 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:44:50.450271 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:44:50.457455 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:44:50.497831 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:44:50.501879 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:44:50.506146 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:44:50.508536 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:44:50.508758 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:44:50.514025 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:44:50.516666 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:44:50.519336 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:44:50.531221 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:44:50.541456 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:44:50.551023 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:44:50.564740 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:44:50.572134 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:44:50.582295 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:44:50.586327 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:44:50.590211 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:44:50.590359 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:44:50.596524 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:44:50.638917 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:44:50.641764 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:44:50.650343 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:44:50.657964 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:44:50.658175 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:44:50.667328 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:44:50.667585 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:44:50.703924 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:44:50.704180 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:44:50.712332 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:44:50.727266 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:44:50.730632 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:44:50.732474 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:44:50.736606 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:44:50.736786 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:44:50.750154 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:44:50.750510 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:44:50.769868 ignition[1327]: INFO : Ignition 2.20.0
Feb 13 19:44:50.769868 ignition[1327]: INFO : Stage: umount
Feb 13 19:44:50.775145 ignition[1327]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:44:50.775145 ignition[1327]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:44:50.775145 ignition[1327]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:44:50.775145 ignition[1327]: INFO : PUT result: OK
Feb 13 19:44:50.791517 ignition[1327]: INFO : umount: umount passed
Feb 13 19:44:50.793121 ignition[1327]: INFO : Ignition finished successfully
Feb 13 19:44:50.796439 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:44:50.797779 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:44:50.797911 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:44:50.803128 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:44:50.803252 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:44:50.808272 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:44:50.808364 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:44:50.810245 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:44:50.811307 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:44:50.813769 systemd[1]: Stopped target network.target - Network.
Feb 13 19:44:50.816180 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:44:50.818666 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:44:50.829120 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:44:50.831400 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:44:50.832765 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:44:50.835853 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:44:50.837558 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:44:50.840862 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:44:50.840934 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:44:50.845635 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:44:50.845698 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:44:50.849326 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:44:50.849402 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:44:50.851052 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:44:50.851134 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:44:50.856295 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:44:50.859221 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:44:50.882018 systemd-networkd[1085]: eth0: DHCPv6 lease lost
Feb 13 19:44:50.882020 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:44:50.882269 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:44:50.886874 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:44:50.892685 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:44:50.896342 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:44:50.896530 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:44:50.924793 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:44:50.924869 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:44:50.929552 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:44:50.929655 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:44:50.937665 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:44:50.939154 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:44:50.939256 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:44:50.942730 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:44:50.944354 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:44:50.948209 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:44:50.948298 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:44:50.949887 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:44:50.950103 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:44:50.951907 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:44:50.981825 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:44:50.982733 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:44:50.994101 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:44:50.994217 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:44:51.000519 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:44:51.000587 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:44:51.004369 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:44:51.004500 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:44:51.011940 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:44:51.012044 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:44:51.016046 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:44:51.016116 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:44:51.042630 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:44:51.045902 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:44:51.046031 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:44:51.048144 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:44:51.048228 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:44:51.049446 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:44:51.049508 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:44:51.055779 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:44:51.055845 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:44:51.064882 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:44:51.065695 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:44:51.089007 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:44:51.089168 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:44:51.096105 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:44:51.109322 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:44:51.131941 systemd[1]: Switching root.
Feb 13 19:44:51.183030 systemd-journald[179]: Journal stopped
Feb 13 19:44:53.318538 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:44:53.318647 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:44:53.318679 kernel: SELinux: policy capability open_perms=1
Feb 13 19:44:53.318702 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:44:53.318728 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:44:53.318748 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:44:53.318767 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:44:53.318793 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:44:53.318814 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:44:53.318835 kernel: audit: type=1403 audit(1739475891.623:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:44:53.318859 systemd[1]: Successfully loaded SELinux policy in 72.877ms.
Feb 13 19:44:53.318889 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.976ms.
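The two summary lines just above carry the only timing figures for the SELinux setup (policy load and relabel). When comparing boots, it can help to pull such figures out programmatically; the small helper below targets the exact "in N.NNNms" phrasing used in this log and is a reading convenience only.

    # Extract millisecond timings from systemd summary lines like those above.
    import re

    lines = [
        "systemd[1]: Successfully loaded SELinux policy in 72.877ms.",
        "systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.976ms.",
    ]
    for line in lines:
        if m := re.search(r"in (\d+(?:\.\d+)?)ms", line):
            print(f"{float(m.group(1)):8.3f} ms  <-  {line}")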
Feb 13 19:44:53.318911 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:44:53.318932 systemd[1]: Detected virtualization amazon.
Feb 13 19:44:53.321140 systemd[1]: Detected architecture x86-64.
Feb 13 19:44:53.321184 systemd[1]: Detected first boot.
Feb 13 19:44:53.321208 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:44:53.321231 zram_generator::config[1370]: No configuration found.
Feb 13 19:44:53.321258 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:44:53.321288 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:44:53.321465 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:44:53.321492 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:44:53.321515 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:44:53.321538 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:44:53.321560 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:44:53.321582 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:44:53.321605 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:44:53.321633 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:44:53.321655 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:44:53.321678 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:44:53.321700 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:44:53.321721 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:44:53.321744 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:44:53.321766 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:44:53.321790 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:44:53.321813 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:44:53.321839 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:44:53.321860 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:44:53.321882 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:44:53.321904 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:44:53.321927 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:44:53.321962 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:44:53.321991 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:44:53.322015 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:44:53.322042 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:44:53.322065 systemd[1]: Reached target swap.target - Swaps.
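"Initializing machine ID from VM UUID" above refers to seeding /etc/machine-id from the hypervisor-provided DMI product UUID on a first boot. The sketch below only shows where that UUID lives in sysfs (a real, root-readable path); systemd's actual derivation of the machine ID involves more than this naive dash-stripping, so treat it as orientation, not the algorithm.

    # Where the VM UUID that seeds the machine ID comes from (requires root).
    from pathlib import Path

    uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    machine_id = uuid.replace("-", "").lower()   # naive 32-hex-digit rendering
    print(machine_id)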
Feb 13 19:44:53.322086 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:44:53.322108 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:44:53.322130 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:44:53.322151 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:44:53.322174 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:44:53.322196 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:44:53.322218 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:44:53.322244 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:44:53.322267 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:44:53.322290 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:44:53.322314 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:44:53.322338 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:44:53.322359 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:44:53.322384 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:44:53.322406 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:44:53.322432 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:44:53.322453 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:44:53.322476 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:44:53.322498 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:44:53.322521 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:44:53.322545 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:44:53.322565 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:44:53.322589 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:44:53.322611 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:44:53.322636 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:44:53.322659 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:44:53.322681 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:44:53.322699 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:44:53.322718 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:44:53.322737 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:44:53.322758 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:44:53.322778 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:44:53.322800 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:44:53.322826 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:44:53.322848 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:44:53.322870 systemd[1]: Stopped verity-setup.service.
Feb 13 19:44:53.322893 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:44:53.322915 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:44:53.322937 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:44:53.322989 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:44:53.323014 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:44:53.323040 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:44:53.323062 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:44:53.323130 systemd-journald[1445]: Collecting audit messages is disabled.
Feb 13 19:44:53.323171 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:44:53.323195 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:44:53.323215 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:44:53.323351 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:44:53.323378 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:44:53.323398 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:44:53.323417 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:44:53.323442 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:44:53.323471 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:44:53.323501 systemd-journald[1445]: Journal started
Feb 13 19:44:53.323551 systemd-journald[1445]: Runtime Journal (/run/log/journal/ec26b2407a5914cae6384a56db8d6bab) is 4.8M, max 38.6M, 33.7M free.
Feb 13 19:44:52.660304 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:44:53.330964 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:44:52.686216 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 19:44:52.687373 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:44:53.332014 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:44:53.363413 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:44:53.377218 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:44:53.391682 kernel: loop: module loaded
Feb 13 19:44:53.381433 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:44:53.381542 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:44:53.384805 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:44:53.392976 kernel: fuse: init (API version 7.39)
Feb 13 19:44:53.403575 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:44:53.430355 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:44:53.431843 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:44:53.443271 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:44:53.446482 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:44:53.448867 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:44:53.453100 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:44:53.460390 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:44:53.470192 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:44:53.479402 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:44:53.483942 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:44:53.486538 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:44:53.491283 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:44:53.491505 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:44:53.493353 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:44:53.552393 kernel: ACPI: bus type drm_connector registered
Feb 13 19:44:53.537660 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:44:53.539443 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:44:53.541686 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:44:53.544271 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:44:53.544670 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:44:53.572755 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:44:53.583016 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:44:53.600519 systemd-journald[1445]: Time spent on flushing to /var/log/journal/ec26b2407a5914cae6384a56db8d6bab is 197.044ms for 965 entries.
Feb 13 19:44:53.600519 systemd-journald[1445]: System Journal (/var/log/journal/ec26b2407a5914cae6384a56db8d6bab) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:44:53.825806 systemd-journald[1445]: Received client request to flush runtime journal.
Feb 13 19:44:53.825913 kernel: loop0: detected capacity change from 0 to 138184
Feb 13 19:44:53.831306 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:44:53.831411 kernel: loop1: detected capacity change from 0 to 140992
Feb 13 19:44:53.585414 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:44:53.592424 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:44:53.602405 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:44:53.693503 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:44:53.738612 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:44:53.751760 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
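The journald flush summary above gives enough data for a quick per-entry cost estimate when the runtime journal is written out to /var/log/journal:

    # Figures taken from the journald line above.
    flush_ms, entries = 197.044, 965
    print(f"{flush_ms / entries * 1000:.0f} us per entry")   # ~204 us per entry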
Feb 13 19:44:53.761278 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:44:53.767127 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:44:53.801227 systemd-tmpfiles[1490]: ACLs are not supported, ignoring.
Feb 13 19:44:53.801252 systemd-tmpfiles[1490]: ACLs are not supported, ignoring.
Feb 13 19:44:53.820636 udevadm[1512]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 19:44:53.826850 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:44:53.840628 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:44:53.843034 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:44:53.905744 kernel: loop2: detected capacity change from 0 to 205544
Feb 13 19:44:53.971314 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:44:53.981432 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:44:54.001054 kernel: loop3: detected capacity change from 0 to 62848
Feb 13 19:44:54.087304 systemd-tmpfiles[1522]: ACLs are not supported, ignoring.
Feb 13 19:44:54.088832 systemd-tmpfiles[1522]: ACLs are not supported, ignoring.
Feb 13 19:44:54.114211 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:44:54.254982 kernel: loop4: detected capacity change from 0 to 138184
Feb 13 19:44:54.324427 kernel: loop5: detected capacity change from 0 to 140992
Feb 13 19:44:54.395068 kernel: loop6: detected capacity change from 0 to 205544
Feb 13 19:44:54.439992 kernel: loop7: detected capacity change from 0 to 62848
Feb 13 19:44:54.465182 (sd-merge)[1526]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 19:44:54.466598 (sd-merge)[1526]: Merged extensions into '/usr'.
Feb 13 19:44:54.508316 systemd[1]: Reloading requested from client PID 1489 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:44:54.508340 systemd[1]: Reloading...
Feb 13 19:44:54.937108 zram_generator::config[1555]: No configuration found.
Feb 13 19:44:55.303564 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:44:55.325979 ldconfig[1483]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:44:55.429581 systemd[1]: Reloading finished in 909 ms.
Feb 13 19:44:55.469246 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:44:55.471525 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:44:55.486334 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:44:55.513361 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:44:55.538284 systemd[1]: Reloading requested from client PID 1601 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:44:55.538303 systemd[1]: Reloading...
Feb 13 19:44:55.644411 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
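The (sd-merge) lines above are systemd-sysext discovering the four extension images (each backed by one of the loopN devices) and overlaying them onto /usr. A sketch of the discovery half follows; the directories are systemd-sysext's documented search path, while the overlay-mount half is omitted. Note that the kubernetes.raw link written by Ignition earlier lands in exactly one of these directories.

    # Discover sysext images the way the 'Using extensions ...' line implies.
    from pathlib import Path

    SEARCH = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    names = sorted(
        img.stem                                  # "kubernetes.raw" -> "kubernetes"
        for d in map(Path, SEARCH) if d.is_dir()
        for img in d.glob("*.raw")
    )
    print("Using extensions:", ", ".join(f"'{n}'" for n in names))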
Feb 13 19:44:55.646229 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:44:55.647750 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:44:55.648374 systemd-tmpfiles[1602]: ACLs are not supported, ignoring.
Feb 13 19:44:55.648567 systemd-tmpfiles[1602]: ACLs are not supported, ignoring.
Feb 13 19:44:55.653838 systemd-tmpfiles[1602]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:44:55.653853 systemd-tmpfiles[1602]: Skipping /boot
Feb 13 19:44:55.694200 systemd-tmpfiles[1602]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:44:55.694392 systemd-tmpfiles[1602]: Skipping /boot
Feb 13 19:44:55.810301 zram_generator::config[1635]: No configuration found.
Feb 13 19:44:56.070141 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:44:56.168976 systemd[1]: Reloading finished in 630 ms.
Feb 13 19:44:56.192938 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:44:56.198579 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:44:56.244364 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:44:56.255867 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:44:56.261211 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:44:56.272212 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:44:56.280368 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:44:56.283765 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:44:56.302490 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:44:56.302886 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:44:56.325899 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:44:56.351290 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:44:56.375337 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:44:56.376914 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:44:56.377123 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:44:56.384876 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:44:56.385939 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:44:56.388599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:44:56.389304 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:44:56.401736 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:44:56.403251 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:44:56.412408 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:44:56.415024 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:44:56.415472 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:44:56.429179 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:44:56.432684 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:44:56.434390 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:44:56.435426 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:44:56.441690 systemd-udevd[1688]: Using default interface naming scheme 'v255'.
Feb 13 19:44:56.453507 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:44:56.456373 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:44:56.456594 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:44:56.465411 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:44:56.473535 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:44:56.476329 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:44:56.478028 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:44:56.500304 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:44:56.500751 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:44:56.503287 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:44:56.601472 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:44:56.608374 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:44:56.635223 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:44:56.638542 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:44:56.656781 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:44:56.685180 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:44:56.688857 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:44:56.724478 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:44:56.737298 augenrules[1733]: No rules
Feb 13 19:44:56.740726 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:44:56.743128 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:44:56.951902 systemd-resolved[1684]: Positive Trust Anchors:
Feb 13 19:44:56.956770 systemd-resolved[1684]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:44:56.956832 systemd-resolved[1684]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:44:56.980737 systemd-networkd[1717]: lo: Link UP
Feb 13 19:44:56.980753 systemd-networkd[1717]: lo: Gained carrier
Feb 13 19:44:56.981702 systemd-networkd[1717]: Enumeration completed
Feb 13 19:44:56.982143 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:44:56.982893 systemd-resolved[1684]: Defaulting to hostname 'linux'.
Feb 13 19:44:56.993732 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:44:56.995502 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:44:56.997674 systemd[1]: Reached target network.target - Network.
Feb 13 19:44:56.999053 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:44:57.016405 (udev-worker)[1727]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:44:57.043992 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:44:57.162001 systemd-networkd[1717]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:44:57.162019 systemd-networkd[1717]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:44:57.168722 systemd-networkd[1717]: eth0: Link UP
Feb 13 19:44:57.168997 systemd-networkd[1717]: eth0: Gained carrier
Feb 13 19:44:57.169102 systemd-networkd[1717]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:44:57.187193 systemd-networkd[1717]: eth0: DHCPv4 address 172.31.19.60/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:44:57.193184 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Feb 13 19:44:57.204164 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 19:44:57.208999 kernel: ACPI: button: Power Button [PWRF]
Feb 13 19:44:57.215135 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Feb 13 19:44:57.223974 kernel: ACPI: button: Sleep Button [SLPF]
Feb 13 19:44:57.240188 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1742)
Feb 13 19:44:57.300997 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Feb 13 19:44:57.495007 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:44:57.531167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:44:57.595864 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
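The DHCPv4 lease logged above (172.31.19.60/20 with gateway 172.31.16.1) checks out with a quick stdlib computation; the /20 that contains the leased address is 172.31.16.0/20, and the advertised gateway sits inside it:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.19.60/20")   # values from the log
    gw = ipaddress.ip_address("172.31.16.1")
    print(iface.network)            # 172.31.16.0/20
    print(gw in iface.network)      # True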
Feb 13 19:44:57.596470 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:44:57.606559 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:44:57.612486 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:44:57.636922 lvm[1846]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:44:57.671477 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:44:57.682814 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:44:57.689806 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:44:57.710648 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:44:57.758918 lvm[1851]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:44:57.801137 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:44:57.962456 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:44:57.966519 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:44:57.968223 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:44:57.970017 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:44:57.971872 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:44:57.973541 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:44:57.975322 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:44:57.977096 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:44:57.977146 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:44:57.978256 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:44:57.981226 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:44:57.985826 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:44:57.996671 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:44:57.998663 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:44:58.000046 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:44:58.001465 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:44:58.002650 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:44:58.002692 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:44:58.009117 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:44:58.016221 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 19:44:58.025193 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:44:58.028865 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:44:58.044441 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:44:58.045672 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:44:58.049375 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:44:58.087617 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 19:44:58.105996 jq[1861]: false
Feb 13 19:44:58.106871 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 19:44:58.119699 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 19:44:58.129322 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:44:58.146818 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:44:58.162347 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:44:58.167387 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:44:58.168229 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:44:58.176270 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:44:58.186379 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:44:58.195376 dbus-daemon[1860]: [system] SELinux support is enabled
Feb 13 19:44:58.205358 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:44:58.204127 dbus-daemon[1860]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1717 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 19:44:58.217131 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:44:58.217981 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:44:58.242464 (ntainerd)[1881]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:44:58.255143 ntpd[1864]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:07:00 UTC 2025 (1): Starting
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:07:00 UTC 2025 (1): Starting
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: ----------------------------------------------------
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: corporation. Support and training for ntp-4 are
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: available at https://www.nwtime.org/support
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: ----------------------------------------------------
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: proto: precision = 0.065 usec (-24)
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: basedate set to 2025-02-01
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: Listen normally on 3 eth0 172.31.19.60:123
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: Listen normally on 4 lo [::1]:123
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: bind(21) AF_INET6 fe80::419:fcff:fe24:123f%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: unable to create socket on eth0 (5) for fe80::419:fcff:fe24:123f%2#123
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: failed to init interface for address fe80::419:fcff:fe24:123f%2
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: Listening on routing socket on fd #21 for interface updates
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:44:58.300709 ntpd[1864]: 13 Feb 19:44:58 ntpd[1864]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:44:58.302157 update_engine[1875]: I20250213 19:44:58.284089 1875 main.cc:92] Flatcar Update Engine starting
Feb 13 19:44:58.255180 ntpd[1864]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:44:58.255192 ntpd[1864]: ----------------------------------------------------
Feb 13 19:44:58.255201 ntpd[1864]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:44:58.312525 jq[1876]: true
Feb 13 19:44:58.255211 ntpd[1864]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:44:58.312890 update_engine[1875]: I20250213 19:44:58.304162 1875 update_check_scheduler.cc:74] Next update check in 10m45s
Feb 13 19:44:58.255221 ntpd[1864]: corporation. Support and training for ntp-4 are
Feb 13 19:44:58.255231 ntpd[1864]: available at https://www.nwtime.org/support
Feb 13 19:44:58.255241 ntpd[1864]: ----------------------------------------------------
Feb 13 19:44:58.260026 ntpd[1864]: proto: precision = 0.065 usec (-24)
Feb 13 19:44:58.263412 ntpd[1864]: basedate set to 2025-02-01
Feb 13 19:44:58.263439 ntpd[1864]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:44:58.268766 ntpd[1864]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:44:58.268825 ntpd[1864]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:44:58.313467 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 19:44:58.271166 ntpd[1864]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:44:58.271220 ntpd[1864]: Listen normally on 3 eth0 172.31.19.60:123 Feb 13 19:44:58.271266 ntpd[1864]: Listen normally on 4 lo [::1]:123 Feb 13 19:44:58.271395 ntpd[1864]: bind(21) AF_INET6 fe80::419:fcff:fe24:123f%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:44:58.271518 ntpd[1864]: unable to create socket on eth0 (5) for fe80::419:fcff:fe24:123f%2#123 Feb 13 19:44:58.316635 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:44:58.271537 ntpd[1864]: failed to init interface for address fe80::419:fcff:fe24:123f%2 Feb 13 19:44:58.316680 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:44:58.271580 ntpd[1864]: Listening on routing socket on fd #21 for interface updates Feb 13 19:44:58.277619 ntpd[1864]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:44:58.277661 ntpd[1864]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:44:58.321189 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:44:58.321229 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:44:58.332317 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:44:58.347784 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:44:58.351090 dbus-daemon[1860]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:44:58.354960 extend-filesystems[1862]: Found loop4 Feb 13 19:44:58.356451 extend-filesystems[1862]: Found loop5 Feb 13 19:44:58.356451 extend-filesystems[1862]: Found loop6 Feb 13 19:44:58.361853 extend-filesystems[1862]: Found loop7 Feb 13 19:44:58.361853 extend-filesystems[1862]: Found nvme0n1 Feb 13 19:44:58.361853 extend-filesystems[1862]: Found nvme0n1p1 Feb 13 19:44:58.361853 extend-filesystems[1862]: Found nvme0n1p2 Feb 13 19:44:58.361853 extend-filesystems[1862]: Found nvme0n1p3 Feb 13 19:44:58.361853 extend-filesystems[1862]: Found usr Feb 13 19:44:58.361853 extend-filesystems[1862]: Found nvme0n1p4 Feb 13 19:44:58.361853 extend-filesystems[1862]: Found nvme0n1p6 Feb 13 19:44:58.361853 extend-filesystems[1862]: Found nvme0n1p7 Feb 13 19:44:58.361853 extend-filesystems[1862]: Found nvme0n1p9 Feb 13 19:44:58.361853 extend-filesystems[1862]: Checking size of /dev/nvme0n1p9 Feb 13 19:44:58.366764 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:44:58.368331 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:44:58.378503 systemd-networkd[1717]: eth0: Gained IPv6LL Feb 13 19:44:58.392994 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:44:58.401538 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:44:58.407763 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:44:58.415025 extend-filesystems[1862]: Resized partition /dev/nvme0n1p9 Feb 13 19:44:58.424315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
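The extend-filesystems entries above enumerate the block devices and resize the root partition (/dev/nvme0n1p9); in the entries that follow, resize2fs then grows the mounted ext4 filesystem online from 553472 to 1489915 4k blocks. Roughly the equivalent by hand, as a sketch rather than the exact commands the unit runs:

    # ext4 can be grown while mounted; with no size argument, resize2fs
    # expands the filesystem to fill the (already enlarged) partition
    resize2fs /dev/nvme0n1p9
    # confirm the new block count afterwards
    dumpe2fs -h /dev/nvme0n1p9 | grep -i 'block count'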
Feb 13 19:44:58.480995 extend-filesystems[1908]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:44:58.491818 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 19:44:58.478292 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:44:58.483686 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:44:58.483995 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:44:58.587401 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 19:44:58.608410 jq[1893]: true Feb 13 19:44:58.614404 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 19:44:58.675596 tar[1888]: linux-amd64/helm Feb 13 19:44:58.720978 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 19:44:58.753315 extend-filesystems[1908]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 19:44:58.753315 extend-filesystems[1908]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:44:58.753315 extend-filesystems[1908]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 19:44:58.767390 extend-filesystems[1862]: Resized filesystem in /dev/nvme0n1p9 Feb 13 19:44:58.757730 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:44:58.769063 coreos-metadata[1859]: Feb 13 19:44:58.767 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:44:58.758011 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:44:58.772530 coreos-metadata[1859]: Feb 13 19:44:58.772 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:44:58.774816 coreos-metadata[1859]: Feb 13 19:44:58.774 INFO Fetch successful Feb 13 19:44:58.775104 coreos-metadata[1859]: Feb 13 19:44:58.775 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:44:58.803038 coreos-metadata[1859]: Feb 13 19:44:58.783 INFO Fetch successful Feb 13 19:44:58.803038 coreos-metadata[1859]: Feb 13 19:44:58.783 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:44:58.803038 coreos-metadata[1859]: Feb 13 19:44:58.800 INFO Fetch successful Feb 13 19:44:58.803038 coreos-metadata[1859]: Feb 13 19:44:58.800 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 19:44:58.806243 coreos-metadata[1859]: Feb 13 19:44:58.805 INFO Fetch successful Feb 13 19:44:58.806243 coreos-metadata[1859]: Feb 13 19:44:58.805 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:44:58.807843 coreos-metadata[1859]: Feb 13 19:44:58.807 INFO Fetch failed with 404: resource not found Feb 13 19:44:58.808190 coreos-metadata[1859]: Feb 13 19:44:58.807 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:44:58.808827 coreos-metadata[1859]: Feb 13 19:44:58.808 INFO Fetch successful Feb 13 19:44:58.808907 coreos-metadata[1859]: Feb 13 19:44:58.808 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:44:58.815931 coreos-metadata[1859]: Feb 13 19:44:58.815 INFO Fetch successful Feb 13 19:44:58.815931 coreos-metadata[1859]: Feb 13 19:44:58.815 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 19:44:58.825048 coreos-metadata[1859]: Feb 13 19:44:58.820 INFO Fetch successful Feb 13 19:44:58.825048 coreos-metadata[1859]: Feb 13 19:44:58.820 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 19:44:58.825048 coreos-metadata[1859]: Feb 13 19:44:58.821 INFO Fetch successful Feb 13 19:44:58.825048 coreos-metadata[1859]: Feb 13 19:44:58.821 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:44:58.825048 coreos-metadata[1859]: Feb 13 19:44:58.822 INFO Fetch successful Feb 13 19:44:58.865117 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1741) Feb 13 19:44:58.892124 dbus-daemon[1860]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:44:58.894069 dbus-daemon[1860]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=1902 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:44:58.895785 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:44:58.897959 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:44:58.912314 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 19:44:58.948602 systemd-logind[1873]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 19:44:58.948633 systemd-logind[1873]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 19:44:58.948656 systemd-logind[1873]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:44:58.965144 systemd-logind[1873]: New seat seat0. Feb 13 19:44:58.996836 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:44:59.021132 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:44:59.023838 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:44:59.036584 polkitd[1957]: Started polkitd version 121 Feb 13 19:44:59.052042 bash[1973]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:44:59.046548 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:44:59.059523 systemd[1]: Starting sshkeys.service... Feb 13 19:44:59.097228 polkitd[1957]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:44:59.097328 polkitd[1957]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:44:59.111088 polkitd[1957]: Finished loading, compiling and executing 2 rules Feb 13 19:44:59.113267 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:44:59.112903 dbus-daemon[1860]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:44:59.118175 polkitd[1957]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:44:59.142585 amazon-ssm-agent[1919]: Initializing new seelog logger Feb 13 19:44:59.144817 amazon-ssm-agent[1919]: New Seelog Logger Creation Complete Feb 13 19:44:59.149982 amazon-ssm-agent[1919]: 2025/02/13 19:44:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:44:59.149982 amazon-ssm-agent[1919]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:44:59.149982 amazon-ssm-agent[1919]: 2025/02/13 19:44:59 processing appconfig overrides Feb 13 19:44:59.159455 amazon-ssm-agent[1919]: 2025/02/13 19:44:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:44:59.159455 amazon-ssm-agent[1919]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
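The coreos-metadata fetch sequence above is the IMDSv2 pattern: a PUT to mint a session token, then GETs carrying the token header. The single 404 on meta-data/ipv6 is expected on an instance with no IPv6 address assigned. The same calls by hand look roughly like this:

    # mint a session token (TTL in seconds), then use it for metadata reads
    TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/2021-01-03/meta-data/instance-id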
Feb 13 19:44:59.159455 amazon-ssm-agent[1919]: 2025/02/13 19:44:59 processing appconfig overrides Feb 13 19:44:59.160599 amazon-ssm-agent[1919]: 2025-02-13 19:44:59 INFO Proxy environment variables: Feb 13 19:44:59.170836 amazon-ssm-agent[1919]: 2025/02/13 19:44:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:44:59.182902 amazon-ssm-agent[1919]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:44:59.182902 amazon-ssm-agent[1919]: 2025/02/13 19:44:59 processing appconfig overrides Feb 13 19:44:59.179353 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:44:59.189322 amazon-ssm-agent[1919]: 2025/02/13 19:44:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:44:59.189322 amazon-ssm-agent[1919]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:44:59.189322 amazon-ssm-agent[1919]: 2025/02/13 19:44:59 processing appconfig overrides Feb 13 19:44:59.189189 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:44:59.247540 systemd-resolved[1684]: System hostname changed to 'ip-172-31-19-60'. Feb 13 19:44:59.248041 systemd-hostnamed[1902]: Hostname set to <ip-172-31-19-60> (transient) Feb 13 19:44:59.280060 amazon-ssm-agent[1919]: 2025-02-13 19:44:59 INFO https_proxy: Feb 13 19:44:59.338560 coreos-metadata[1999]: Feb 13 19:44:59.338 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:44:59.341061 coreos-metadata[1999]: Feb 13 19:44:59.340 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:44:59.343140 coreos-metadata[1999]: Feb 13 19:44:59.342 INFO Fetch successful Feb 13 19:44:59.343140 coreos-metadata[1999]: Feb 13 19:44:59.342 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:44:59.349262 coreos-metadata[1999]: Feb 13 19:44:59.344 INFO Fetch successful Feb 13 19:44:59.352855 unknown[1999]: wrote ssh authorized keys file for user: core Feb 13 19:44:59.385982 amazon-ssm-agent[1919]: 2025-02-13 19:44:59 INFO http_proxy: Feb 13 19:44:59.422983 update-ssh-keys[2034]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:44:59.427884 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:44:59.443473 systemd[1]: Finished sshkeys.service. Feb 13 19:44:59.470262 locksmithd[1898]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:44:59.482687 amazon-ssm-agent[1919]: 2025-02-13 19:44:59 INFO no_proxy: Feb 13 19:44:59.497743 containerd[1881]: time="2025-02-13T19:44:59.496128454Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:44:59.591185 amazon-ssm-agent[1919]: 2025-02-13 19:44:59 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:44:59.699036 amazon-ssm-agent[1919]: 2025-02-13 19:44:59 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:44:59.745672 containerd[1881]: time="2025-02-13T19:44:59.745610966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:44:59.751566 containerd[1881]: time="2025-02-13T19:44:59.751506029Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..."
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:44:59.752981 containerd[1881]: time="2025-02-13T19:44:59.751999203Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:44:59.752981 containerd[1881]: time="2025-02-13T19:44:59.752041820Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:44:59.752981 containerd[1881]: time="2025-02-13T19:44:59.752241428Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:44:59.752981 containerd[1881]: time="2025-02-13T19:44:59.752263564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:44:59.752981 containerd[1881]: time="2025-02-13T19:44:59.752339602Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:44:59.752981 containerd[1881]: time="2025-02-13T19:44:59.752369637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:44:59.752981 containerd[1881]: time="2025-02-13T19:44:59.752681140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:44:59.752981 containerd[1881]: time="2025-02-13T19:44:59.752705923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:44:59.752981 containerd[1881]: time="2025-02-13T19:44:59.752727661Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:44:59.752981 containerd[1881]: time="2025-02-13T19:44:59.752745268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:44:59.752981 containerd[1881]: time="2025-02-13T19:44:59.752861235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:44:59.761805 containerd[1881]: time="2025-02-13T19:44:59.757324809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:44:59.763014 containerd[1881]: time="2025-02-13T19:44:59.762203669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:44:59.763180 containerd[1881]: time="2025-02-13T19:44:59.763149804Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:44:59.764567 containerd[1881]: time="2025-02-13T19:44:59.764530053Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 19:44:59.772178 containerd[1881]: time="2025-02-13T19:44:59.770133959Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:44:59.778987 containerd[1881]: time="2025-02-13T19:44:59.778847676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:44:59.779147 containerd[1881]: time="2025-02-13T19:44:59.778941611Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:44:59.779247 containerd[1881]: time="2025-02-13T19:44:59.779231423Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:44:59.779347 containerd[1881]: time="2025-02-13T19:44:59.779332700Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:44:59.779443 containerd[1881]: time="2025-02-13T19:44:59.779428853Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:44:59.784100 containerd[1881]: time="2025-02-13T19:44:59.781773666Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:44:59.784100 containerd[1881]: time="2025-02-13T19:44:59.782191468Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:44:59.784100 containerd[1881]: time="2025-02-13T19:44:59.782407040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:44:59.784100 containerd[1881]: time="2025-02-13T19:44:59.782432991Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:44:59.784100 containerd[1881]: time="2025-02-13T19:44:59.782455793Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:44:59.784100 containerd[1881]: time="2025-02-13T19:44:59.782478328Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:44:59.784100 containerd[1881]: time="2025-02-13T19:44:59.782505261Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:44:59.784100 containerd[1881]: time="2025-02-13T19:44:59.782526717Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:44:59.784100 containerd[1881]: time="2025-02-13T19:44:59.782548770Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:44:59.784100 containerd[1881]: time="2025-02-13T19:44:59.782574269Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:44:59.784100 containerd[1881]: time="2025-02-13T19:44:59.782598254Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:44:59.784100 containerd[1881]: time="2025-02-13T19:44:59.782617418Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:44:59.784100 containerd[1881]: time="2025-02-13T19:44:59.782635169Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 13 19:44:59.784100 containerd[1881]: time="2025-02-13T19:44:59.782671882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:44:59.784764 containerd[1881]: time="2025-02-13T19:44:59.782694475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:44:59.784764 containerd[1881]: time="2025-02-13T19:44:59.782712908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:44:59.784764 containerd[1881]: time="2025-02-13T19:44:59.782735755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:44:59.784764 containerd[1881]: time="2025-02-13T19:44:59.782755146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:44:59.784764 containerd[1881]: time="2025-02-13T19:44:59.782776959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:44:59.784764 containerd[1881]: time="2025-02-13T19:44:59.782794966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:44:59.784764 containerd[1881]: time="2025-02-13T19:44:59.782815480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:44:59.784764 containerd[1881]: time="2025-02-13T19:44:59.782833529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:44:59.784764 containerd[1881]: time="2025-02-13T19:44:59.782980993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:44:59.784764 containerd[1881]: time="2025-02-13T19:44:59.783004168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:44:59.784764 containerd[1881]: time="2025-02-13T19:44:59.783021856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:44:59.784764 containerd[1881]: time="2025-02-13T19:44:59.783043064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:44:59.784764 containerd[1881]: time="2025-02-13T19:44:59.783065488Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:44:59.784764 containerd[1881]: time="2025-02-13T19:44:59.783102930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:44:59.784764 containerd[1881]: time="2025-02-13T19:44:59.783124298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:44:59.785428 containerd[1881]: time="2025-02-13T19:44:59.783141010Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:44:59.793995 amazon-ssm-agent[1919]: 2025-02-13 19:44:59 INFO Agent will take identity from EC2 Feb 13 19:44:59.794119 containerd[1881]: time="2025-02-13T19:44:59.790063344Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:44:59.794119 containerd[1881]: time="2025-02-13T19:44:59.790134734Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:44:59.794119 containerd[1881]: time="2025-02-13T19:44:59.790153623Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:44:59.794119 containerd[1881]: time="2025-02-13T19:44:59.790178118Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:44:59.794119 containerd[1881]: time="2025-02-13T19:44:59.790195262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:44:59.794119 containerd[1881]: time="2025-02-13T19:44:59.790219670Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:44:59.794119 containerd[1881]: time="2025-02-13T19:44:59.790236302Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:44:59.794119 containerd[1881]: time="2025-02-13T19:44:59.790258774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:44:59.794456 containerd[1881]: time="2025-02-13T19:44:59.790701537Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:44:59.794456 containerd[1881]: time="2025-02-13T19:44:59.790772912Z" level=info msg="Connect containerd service" Feb 13 19:44:59.794456 containerd[1881]: time="2025-02-13T19:44:59.790825291Z" level=info msg="using legacy CRI server" Feb 13 19:44:59.794456 containerd[1881]: time="2025-02-13T19:44:59.790834969Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:44:59.794456 containerd[1881]: time="2025-02-13T19:44:59.791039404Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:44:59.795533 containerd[1881]: time="2025-02-13T19:44:59.795162611Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:44:59.795659 containerd[1881]: time="2025-02-13T19:44:59.795642753Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:44:59.795729 containerd[1881]: time="2025-02-13T19:44:59.795716931Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:44:59.795868 containerd[1881]: time="2025-02-13T19:44:59.795780134Z" level=info msg="Start subscribing containerd event" Feb 13 19:44:59.795868 containerd[1881]: time="2025-02-13T19:44:59.795829395Z" level=info msg="Start recovering state" Feb 13 19:44:59.798113 containerd[1881]: time="2025-02-13T19:44:59.795919817Z" level=info msg="Start event monitor" Feb 13 19:44:59.798113 containerd[1881]: time="2025-02-13T19:44:59.795942307Z" level=info msg="Start snapshots syncer" Feb 13 19:44:59.798113 containerd[1881]: time="2025-02-13T19:44:59.796035391Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:44:59.798113 containerd[1881]: time="2025-02-13T19:44:59.796048020Z" level=info msg="Start streaming server" Feb 13 19:44:59.798113 containerd[1881]: time="2025-02-13T19:44:59.796138777Z" level=info msg="containerd successfully booted in 0.305355s" Feb 13 19:44:59.796298 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:44:59.908528 amazon-ssm-agent[1919]: 2025-02-13 19:44:59 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:45:00.002011 amazon-ssm-agent[1919]: 2025-02-13 19:44:59 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:45:00.018879 sshd_keygen[1914]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:45:00.110367 amazon-ssm-agent[1919]: 2025-02-13 19:44:59 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:45:00.149773 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:45:00.178473 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:45:00.214070 systemd[1]: Started sshd@0-172.31.19.60:22-139.178.89.65:34908.service - OpenSSH per-connection server daemon (139.178.89.65:34908). Feb 13 19:45:00.222136 amazon-ssm-agent[1919]: 2025-02-13 19:44:59 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:45:00.279462 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:45:00.279778 systemd[1]: Finished issuegen.service - Generate /run/issue. 
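The one error in containerd's startup above, "no network config found in /etc/cni/net.d", is normal at this point: the CRI plugin loads CNI configuration lazily, and nothing has installed one yet (typically a CNI DaemonSet drops one in after the node joins a cluster). A minimal conflist of the kind that would satisfy that loader, with an illustrative name and subnet and assuming the standard bridge/host-local/portmap plugins exist under /opt/cni/bin, looks like:

    cat <<'EOF' > /etc/cni/net.d/10-example.conflist
    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF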
Feb 13 19:45:00.296456 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:45:00.325652 amazon-ssm-agent[1919]: 2025-02-13 19:44:59 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Feb 13 19:45:00.425430 amazon-ssm-agent[1919]: 2025-02-13 19:44:59 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:45:00.439381 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:45:00.453525 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:45:00.550116 amazon-ssm-agent[1919]: 2025-02-13 19:44:59 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 19:45:00.541518 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:45:00.545456 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:45:00.637473 amazon-ssm-agent[1919]: 2025-02-13 19:44:59 INFO [Registrar] Starting registrar module Feb 13 19:45:00.740347 amazon-ssm-agent[1919]: 2025-02-13 19:44:59 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:45:00.844706 amazon-ssm-agent[1919]: 2025-02-13 19:45:00 INFO [EC2Identity] EC2 registration was successful. Feb 13 19:45:00.893855 amazon-ssm-agent[1919]: 2025-02-13 19:45:00 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:45:00.893855 amazon-ssm-agent[1919]: 2025-02-13 19:45:00 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:45:00.893855 amazon-ssm-agent[1919]: 2025-02-13 19:45:00 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:45:00.942508 tar[1888]: linux-amd64/LICENSE Feb 13 19:45:00.943200 tar[1888]: linux-amd64/README.md Feb 13 19:45:00.953017 amazon-ssm-agent[1919]: 2025-02-13 19:45:00 INFO [CredentialRefresher] Next credential rotation will be in 31.741651541533333 minutes Feb 13 19:45:00.982840 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:45:01.042072 sshd[2094]: Accepted publickey for core from 139.178.89.65 port 34908 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:45:01.058199 sshd-session[2094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:45:01.208148 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:45:01.257040 ntpd[1864]: Listen normally on 6 eth0 [fe80::419:fcff:fe24:123f%2]:123 Feb 13 19:45:01.267040 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:45:01.398916 systemd-logind[1873]: New session 1 of user core. Feb 13 19:45:01.666087 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:45:01.715666 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 19:45:01.872199 (systemd)[2108]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:45:02.011033 amazon-ssm-agent[1919]: 2025-02-13 19:45:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:45:02.110086 amazon-ssm-agent[1919]: 2025-02-13 19:45:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2111) started Feb 13 19:45:02.210523 amazon-ssm-agent[1919]: 2025-02-13 19:45:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:45:02.982734 systemd[2108]: Queued start job for default target default.target. Feb 13 19:45:03.002610 systemd[2108]: Created slice app.slice - User Application Slice. Feb 13 19:45:03.002676 systemd[2108]: Reached target paths.target - Paths. Feb 13 19:45:03.002698 systemd[2108]: Reached target timers.target - Timers. Feb 13 19:45:03.025108 systemd[2108]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:45:03.125539 systemd[2108]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:45:03.126604 systemd[2108]: Reached target sockets.target - Sockets. Feb 13 19:45:03.126791 systemd[2108]: Reached target basic.target - Basic System. Feb 13 19:45:03.131383 systemd[2108]: Reached target default.target - Main User Target. Feb 13 19:45:03.131785 systemd[2108]: Startup finished in 1.173s. Feb 13 19:45:03.132254 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:45:03.145607 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:45:03.392568 systemd[1]: Started sshd@1-172.31.19.60:22-139.178.89.65:34922.service - OpenSSH per-connection server daemon (139.178.89.65:34922). Feb 13 19:45:03.635061 sshd[2131]: Accepted publickey for core from 139.178.89.65 port 34922 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:45:03.639919 sshd-session[2131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:45:03.686880 systemd-logind[1873]: New session 2 of user core. Feb 13 19:45:03.695289 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:45:03.718437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:45:03.735914 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:45:03.738518 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:45:03.917174 systemd[1]: Startup finished in 947ms (kernel) + 10.814s (initrd) + 12.354s (userspace) = 24.116s. Feb 13 19:45:04.108886 sshd[2139]: Connection closed by 139.178.89.65 port 34922 Feb 13 19:45:04.111393 sshd-session[2131]: pam_unix(sshd:session): session closed for user core Feb 13 19:45:04.118283 systemd[1]: sshd@1-172.31.19.60:22-139.178.89.65:34922.service: Deactivated successfully. Feb 13 19:45:04.119239 systemd-logind[1873]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:45:04.122660 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:45:04.124852 systemd-logind[1873]: Removed session 2. Feb 13 19:45:04.151104 systemd[1]: Started sshd@2-172.31.19.60:22-139.178.89.65:41554.service - OpenSSH per-connection server daemon (139.178.89.65:41554). 
Feb 13 19:45:04.354236 sshd[2152]: Accepted publickey for core from 139.178.89.65 port 41554 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:45:04.356870 sshd-session[2152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:45:04.370169 systemd-logind[1873]: New session 3 of user core. Feb 13 19:45:04.376552 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:45:04.509248 sshd[2154]: Connection closed by 139.178.89.65 port 41554 Feb 13 19:45:04.509228 sshd-session[2152]: pam_unix(sshd:session): session closed for user core Feb 13 19:45:04.526291 systemd[1]: sshd@2-172.31.19.60:22-139.178.89.65:41554.service: Deactivated successfully. Feb 13 19:45:04.541022 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:45:04.578240 systemd-logind[1873]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:45:04.599085 systemd[1]: Started sshd@3-172.31.19.60:22-139.178.89.65:41570.service - OpenSSH per-connection server daemon (139.178.89.65:41570). Feb 13 19:45:04.603667 systemd-logind[1873]: Removed session 3. Feb 13 19:45:04.857475 sshd[2159]: Accepted publickey for core from 139.178.89.65 port 41570 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:45:04.859915 sshd-session[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:45:04.884756 systemd-logind[1873]: New session 4 of user core. Feb 13 19:45:04.892145 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:45:05.042485 sshd[2161]: Connection closed by 139.178.89.65 port 41570 Feb 13 19:45:05.044196 sshd-session[2159]: pam_unix(sshd:session): session closed for user core Feb 13 19:45:05.051790 systemd[1]: sshd@3-172.31.19.60:22-139.178.89.65:41570.service: Deactivated successfully. Feb 13 19:45:05.057935 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:45:05.070893 systemd-logind[1873]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:45:05.120547 systemd[1]: Started sshd@4-172.31.19.60:22-139.178.89.65:41580.service - OpenSSH per-connection server daemon (139.178.89.65:41580). Feb 13 19:45:05.125688 systemd-logind[1873]: Removed session 4. Feb 13 19:45:05.918025 systemd-resolved[1684]: Clock change detected. Flushing caches. Feb 13 19:45:06.031304 sshd[2166]: Accepted publickey for core from 139.178.89.65 port 41580 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:45:06.034731 sshd-session[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:45:06.042511 systemd-logind[1873]: New session 5 of user core. Feb 13 19:45:06.058085 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:45:06.134206 kubelet[2137]: E0213 19:45:06.134129 2137 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:45:06.140624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:45:06.140942 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:45:06.141849 systemd[1]: kubelet.service: Consumed 1.044s CPU time. 
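The kubelet failure above is the expected crash on a node that has not been bootstrapped yet: /var/lib/kubelet/config.yaml is not shipped in the image, it is written during cluster join (with kubeadm, by "kubeadm init" or "kubeadm join"; this log does not show which bootstrapper is in use). Until then the unit exits with status 1 and systemd keeps rescheduling it, which is what the later "Scheduled restart job" entries are. To check by hand:

    # kubelet exits immediately while this file is missing
    ls -l /var/lib/kubelet/config.yaml
    systemctl status kubelet --no-pager
    # if this node is bootstrapped with kubeadm, the file appears after, e.g.:
    #   kubeadm join <control-plane>:6443 --token ... --discovery-token-ca-cert-hash ...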
Feb 13 19:45:06.206219 sudo[2172]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:45:06.208179 sudo[2172]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:45:06.243470 sudo[2172]: pam_unix(sudo:session): session closed for user root Feb 13 19:45:06.267922 sshd[2170]: Connection closed by 139.178.89.65 port 41580 Feb 13 19:45:06.271384 sshd-session[2166]: pam_unix(sshd:session): session closed for user core Feb 13 19:45:06.278323 systemd[1]: sshd@4-172.31.19.60:22-139.178.89.65:41580.service: Deactivated successfully. Feb 13 19:45:06.287502 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:45:06.291715 systemd-logind[1873]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:45:06.350327 systemd[1]: Started sshd@5-172.31.19.60:22-139.178.89.65:41582.service - OpenSSH per-connection server daemon (139.178.89.65:41582). Feb 13 19:45:06.357845 systemd-logind[1873]: Removed session 5. Feb 13 19:45:06.584888 sshd[2177]: Accepted publickey for core from 139.178.89.65 port 41582 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:45:06.587624 sshd-session[2177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:45:06.600646 systemd-logind[1873]: New session 6 of user core. Feb 13 19:45:06.604378 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:45:06.729142 sudo[2181]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:45:06.730738 sudo[2181]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:45:06.739044 sudo[2181]: pam_unix(sudo:session): session closed for user root Feb 13 19:45:06.761632 sudo[2180]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:45:06.764867 sudo[2180]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:45:06.813633 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:45:06.951737 augenrules[2203]: No rules Feb 13 19:45:06.954712 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:45:06.955793 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:45:06.961303 sudo[2180]: pam_unix(sudo:session): session closed for user root Feb 13 19:45:06.985702 sshd[2179]: Connection closed by 139.178.89.65 port 41582 Feb 13 19:45:06.987653 sshd-session[2177]: pam_unix(sshd:session): session closed for user core Feb 13 19:45:06.998961 systemd[1]: sshd@5-172.31.19.60:22-139.178.89.65:41582.service: Deactivated successfully. Feb 13 19:45:07.011014 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:45:07.034338 systemd-logind[1873]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:45:07.040855 systemd[1]: Started sshd@6-172.31.19.60:22-139.178.89.65:41594.service - OpenSSH per-connection server daemon (139.178.89.65:41594). Feb 13 19:45:07.043908 systemd-logind[1873]: Removed session 6. Feb 13 19:45:07.268662 sshd[2211]: Accepted publickey for core from 139.178.89.65 port 41594 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:45:07.271892 sshd-session[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:45:07.294681 systemd-logind[1873]: New session 7 of user core. Feb 13 19:45:07.308096 systemd[1]: Started session-7.scope - Session 7 of User core. 
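The sudo entries above delete two shipped audit rule files and restart audit-rules.service; augenrules then finds nothing left under /etc/audit/rules.d, hence "No rules". augenrules is the merge step: it concatenates every *.rules fragment in that directory into the live ruleset. The sequence is roughly:

    # remove the shipped rule fragments, then rebuild the merged ruleset
    sudo rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules
    # verify whether the merged rules are in sync with the fragments
    sudo augenrules --check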
Feb 13 19:45:07.435201 sudo[2214]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:45:07.435625 sudo[2214]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:45:08.303282 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:45:08.304517 (dockerd)[2231]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:45:08.857671 dockerd[2231]: time="2025-02-13T19:45:08.855260433Z" level=info msg="Starting up" Feb 13 19:45:09.197817 dockerd[2231]: time="2025-02-13T19:45:09.197526839Z" level=info msg="Loading containers: start." Feb 13 19:45:09.711911 kernel: Initializing XFRM netlink socket Feb 13 19:45:09.767571 (udev-worker)[2255]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:45:09.879119 systemd-networkd[1717]: docker0: Link UP Feb 13 19:45:09.925594 dockerd[2231]: time="2025-02-13T19:45:09.925537898Z" level=info msg="Loading containers: done." Feb 13 19:45:09.956690 dockerd[2231]: time="2025-02-13T19:45:09.956617877Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:45:09.956930 dockerd[2231]: time="2025-02-13T19:45:09.956759407Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 19:45:09.956985 dockerd[2231]: time="2025-02-13T19:45:09.956936526Z" level=info msg="Daemon has completed initialization" Feb 13 19:45:10.003038 dockerd[2231]: time="2025-02-13T19:45:10.002467685Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:45:10.002801 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:45:11.615521 containerd[1881]: time="2025-02-13T19:45:11.615100920Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 19:45:12.391372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount291159117.mount: Deactivated successfully. 
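dockerd's overlay2 warning above is informational: when the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, Docker disables its "native diff" fast path and falls back to a slower directory-walk diff when committing or building images; normal container runtime behavior is unaffected. To confirm the storage driver and, where the kernel exposes its build config, the option itself:

    docker info --format 'driver={{.Driver}}'
    # only works if the kernel was built with IKCONFIG_PROC; may be absent here
    zcat /proc/config.gz 2>/dev/null | grep OVERLAY_FS_REDIRECT_DIR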
Feb 13 19:45:15.025269 containerd[1881]: time="2025-02-13T19:45:15.025207798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:15.026936 containerd[1881]: time="2025-02-13T19:45:15.026803751Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27976588" Feb 13 19:45:15.028399 containerd[1881]: time="2025-02-13T19:45:15.027959375Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:15.031532 containerd[1881]: time="2025-02-13T19:45:15.031480251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:15.032892 containerd[1881]: time="2025-02-13T19:45:15.032731163Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 3.417208679s" Feb 13 19:45:15.033063 containerd[1881]: time="2025-02-13T19:45:15.033037861Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\"" Feb 13 19:45:15.037034 containerd[1881]: time="2025-02-13T19:45:15.036993649Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 19:45:16.391438 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:45:16.405241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:45:16.726185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:45:16.733479 (kubelet)[2484]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:45:16.820256 kubelet[2484]: E0213 19:45:16.820198 2484 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:45:16.829770 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:45:16.830013 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
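These pulls are kubelet-driven and land in containerd's k8s.io namespace, which is why a plain "ctr images ls" would show nothing. To inspect or reproduce one of them directly against this containerd (v1.7.23 per the log):

    # list the images kubelet has pulled so far
    ctr --namespace k8s.io images ls -q
    # pull the same image manually into the same namespace
    ctr --namespace k8s.io images pull registry.k8s.io/kube-apiserver:v1.31.6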
Feb 13 19:45:17.910006 containerd[1881]: time="2025-02-13T19:45:17.909947055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:17.911523 containerd[1881]: time="2025-02-13T19:45:17.911463799Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24708193" Feb 13 19:45:17.914279 containerd[1881]: time="2025-02-13T19:45:17.913620685Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:17.919613 containerd[1881]: time="2025-02-13T19:45:17.919560227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:17.922653 containerd[1881]: time="2025-02-13T19:45:17.922542562Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 2.885507354s" Feb 13 19:45:17.922653 containerd[1881]: time="2025-02-13T19:45:17.922657427Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\"" Feb 13 19:45:17.923981 containerd[1881]: time="2025-02-13T19:45:17.923949036Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 19:45:20.142694 containerd[1881]: time="2025-02-13T19:45:20.142589150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:20.145644 containerd[1881]: time="2025-02-13T19:45:20.145532548Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18652425" Feb 13 19:45:20.146948 containerd[1881]: time="2025-02-13T19:45:20.146911621Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:20.154688 containerd[1881]: time="2025-02-13T19:45:20.154627494Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 2.230369503s" Feb 13 19:45:20.154688 containerd[1881]: time="2025-02-13T19:45:20.154677268Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\"" Feb 13 19:45:20.154921 containerd[1881]: time="2025-02-13T19:45:20.154839386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:20.155611 
containerd[1881]: time="2025-02-13T19:45:20.155574563Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 19:45:21.647076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2525073275.mount: Deactivated successfully. Feb 13 19:45:22.542423 containerd[1881]: time="2025-02-13T19:45:22.542327657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:22.544366 containerd[1881]: time="2025-02-13T19:45:22.544177166Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108" Feb 13 19:45:22.546947 containerd[1881]: time="2025-02-13T19:45:22.545715287Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:22.551677 containerd[1881]: time="2025-02-13T19:45:22.551597325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:22.553191 containerd[1881]: time="2025-02-13T19:45:22.552393760Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 2.396705812s" Feb 13 19:45:22.553191 containerd[1881]: time="2025-02-13T19:45:22.552441873Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"" Feb 13 19:45:22.553363 containerd[1881]: time="2025-02-13T19:45:22.553194786Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:45:23.186344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount776925052.mount: Deactivated successfully. 
Feb 13 19:45:24.598486 containerd[1881]: time="2025-02-13T19:45:24.598428238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:24.600418 containerd[1881]: time="2025-02-13T19:45:24.600197302Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 19:45:24.601599 containerd[1881]: time="2025-02-13T19:45:24.601507079Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:24.605542 containerd[1881]: time="2025-02-13T19:45:24.605007249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:24.606398 containerd[1881]: time="2025-02-13T19:45:24.606353740Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.053113143s" Feb 13 19:45:24.606499 containerd[1881]: time="2025-02-13T19:45:24.606403032Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 19:45:24.607594 containerd[1881]: time="2025-02-13T19:45:24.607563128Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:45:25.191983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2731676734.mount: Deactivated successfully. 
Feb 13 19:45:25.197703 containerd[1881]: time="2025-02-13T19:45:25.197646582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:25.198897 containerd[1881]: time="2025-02-13T19:45:25.198838010Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 19:45:25.200811 containerd[1881]: time="2025-02-13T19:45:25.199671084Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:25.202205 containerd[1881]: time="2025-02-13T19:45:25.202145592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:25.204849 containerd[1881]: time="2025-02-13T19:45:25.204605406Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 597.001595ms" Feb 13 19:45:25.204849 containerd[1881]: time="2025-02-13T19:45:25.204657930Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 19:45:25.205720 containerd[1881]: time="2025-02-13T19:45:25.205393951Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 19:45:25.729578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3512178127.mount: Deactivated successfully. Feb 13 19:45:27.083638 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:45:27.098899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:45:28.023428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:45:28.043818 (kubelet)[2615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:45:28.191831 kubelet[2615]: E0213 19:45:28.191721 2615 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:45:28.214847 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:45:28.216877 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:45:29.947623 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 13 19:45:30.025469 containerd[1881]: time="2025-02-13T19:45:30.025267318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:30.027207 containerd[1881]: time="2025-02-13T19:45:30.027128265Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Feb 13 19:45:30.028912 containerd[1881]: time="2025-02-13T19:45:30.028734744Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:30.033601 containerd[1881]: time="2025-02-13T19:45:30.033532105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:45:30.036150 containerd[1881]: time="2025-02-13T19:45:30.035907786Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.83046935s" Feb 13 19:45:30.036150 containerd[1881]: time="2025-02-13T19:45:30.035962517Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Feb 13 19:45:34.150286 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:45:34.162218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:45:34.218903 systemd[1]: Reloading requested from client PID 2653 ('systemctl') (unit session-7.scope)... Feb 13 19:45:34.218983 systemd[1]: Reloading... Feb 13 19:45:34.430857 zram_generator::config[2694]: No configuration found. Feb 13 19:45:34.740735 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:45:34.883803 systemd[1]: Reloading finished in 664 ms. Feb 13 19:45:34.976031 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:45:34.976154 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:45:34.977102 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:45:34.988319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:45:35.276513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:45:35.287762 (kubelet)[2754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:45:35.414398 kubelet[2754]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:45:35.414398 kubelet[2754]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 13 19:45:35.414398 kubelet[2754]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:45:35.414398 kubelet[2754]: I0213 19:45:35.413910 2754 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:45:35.883991 kubelet[2754]: I0213 19:45:35.883944 2754 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:45:35.883991 kubelet[2754]: I0213 19:45:35.883979 2754 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:45:35.884323 kubelet[2754]: I0213 19:45:35.884300 2754 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:45:35.932415 kubelet[2754]: I0213 19:45:35.932376 2754 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:45:35.938617 kubelet[2754]: E0213 19:45:35.938538 2754 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.60:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:45:35.966501 kubelet[2754]: E0213 19:45:35.966428 2754 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:45:35.966501 kubelet[2754]: I0213 19:45:35.966493 2754 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:45:35.972618 kubelet[2754]: I0213 19:45:35.972586 2754 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:45:35.983443 kubelet[2754]: I0213 19:45:35.983364 2754 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:45:35.983701 kubelet[2754]: I0213 19:45:35.983658 2754 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:45:35.984279 kubelet[2754]: I0213 19:45:35.983701 2754 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-60","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:45:35.984751 kubelet[2754]: I0213 19:45:35.984283 2754 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:45:35.984751 kubelet[2754]: I0213 19:45:35.984302 2754 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:45:35.984877 kubelet[2754]: I0213 19:45:35.984785 2754 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:45:35.991064 kubelet[2754]: I0213 19:45:35.991013 2754 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:45:35.991064 kubelet[2754]: I0213 19:45:35.991067 2754 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:45:35.991264 kubelet[2754]: I0213 19:45:35.991110 2754 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:45:35.991264 kubelet[2754]: I0213 19:45:35.991124 2754 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:45:35.998746 kubelet[2754]: W0213 19:45:35.998263 2754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-60&limit=500&resourceVersion=0": dial tcp 172.31.19.60:6443: connect: connection refused Feb 13 19:45:35.998746 kubelet[2754]: E0213 19:45:35.998370 2754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.19.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-60&limit=500&resourceVersion=0\": dial tcp 172.31.19.60:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:45:36.001476 kubelet[2754]: W0213 19:45:36.001079 2754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.60:6443: connect: connection refused Feb 13 19:45:36.001476 kubelet[2754]: E0213 19:45:36.001198 2754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.60:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:45:36.001476 kubelet[2754]: I0213 19:45:36.001324 2754 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:45:36.004909 kubelet[2754]: I0213 19:45:36.004736 2754 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:45:36.005065 kubelet[2754]: W0213 19:45:36.004876 2754 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:45:36.008276 kubelet[2754]: I0213 19:45:36.008235 2754 server.go:1269] "Started kubelet" Feb 13 19:45:36.013801 kubelet[2754]: I0213 19:45:36.012756 2754 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:45:36.015071 kubelet[2754]: I0213 19:45:36.015042 2754 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:45:36.018467 kubelet[2754]: I0213 19:45:36.017934 2754 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:45:36.018673 kubelet[2754]: I0213 19:45:36.018651 2754 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:45:36.020687 kubelet[2754]: I0213 19:45:36.020664 2754 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:45:36.030441 kubelet[2754]: I0213 19:45:36.030407 2754 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:45:36.031495 kubelet[2754]: I0213 19:45:36.031466 2754 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:45:36.031724 kubelet[2754]: E0213 19:45:36.020620 2754 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.60:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.60:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-60.1823dc25cd1bf219 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-60,UID:ip-172-31-19-60,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-60,},FirstTimestamp:2025-02-13 19:45:36.008204825 +0000 UTC m=+0.714364065,LastTimestamp:2025-02-13 19:45:36.008204825 +0000 UTC m=+0.714364065,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-60,}" Feb 
13 19:45:36.034106 kubelet[2754]: E0213 19:45:36.033898 2754 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-19-60\" not found" Feb 13 19:45:36.035326 kubelet[2754]: I0213 19:45:36.035250 2754 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:45:36.039831 kubelet[2754]: E0213 19:45:36.037671 2754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-60?timeout=10s\": dial tcp 172.31.19.60:6443: connect: connection refused" interval="200ms" Feb 13 19:45:36.043915 kubelet[2754]: I0213 19:45:36.040727 2754 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:45:36.044299 kubelet[2754]: I0213 19:45:36.044278 2754 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:45:36.044498 kubelet[2754]: I0213 19:45:36.044477 2754 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:45:36.045559 kubelet[2754]: W0213 19:45:36.045513 2754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.60:6443: connect: connection refused Feb 13 19:45:36.046668 kubelet[2754]: E0213 19:45:36.046640 2754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.60:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:45:36.048449 kubelet[2754]: E0213 19:45:36.047092 2754 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:45:36.048449 kubelet[2754]: I0213 19:45:36.047278 2754 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:45:36.129755 kubelet[2754]: I0213 19:45:36.129687 2754 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:45:36.134996 kubelet[2754]: E0213 19:45:36.134186 2754 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-19-60\" not found" Feb 13 19:45:36.134996 kubelet[2754]: I0213 19:45:36.134186 2754 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:45:36.134996 kubelet[2754]: I0213 19:45:36.134241 2754 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:45:36.134996 kubelet[2754]: I0213 19:45:36.134262 2754 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:45:36.134996 kubelet[2754]: E0213 19:45:36.134318 2754 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:45:36.140459 kubelet[2754]: W0213 19:45:36.140394 2754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.60:6443: connect: connection refused Feb 13 19:45:36.143414 kubelet[2754]: E0213 19:45:36.143372 2754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.60:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:45:36.149878 kubelet[2754]: I0213 19:45:36.149715 2754 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:45:36.150839 kubelet[2754]: I0213 19:45:36.150816 2754 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:45:36.151065 kubelet[2754]: I0213 19:45:36.151047 2754 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:45:36.154801 kubelet[2754]: I0213 19:45:36.154754 2754 policy_none.go:49] "None policy: Start" Feb 13 19:45:36.156320 kubelet[2754]: I0213 19:45:36.156291 2754 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:45:36.156320 kubelet[2754]: I0213 19:45:36.156325 2754 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:45:36.164885 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:45:36.181627 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:45:36.189121 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:45:36.201222 kubelet[2754]: I0213 19:45:36.201188 2754 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:45:36.202109 kubelet[2754]: I0213 19:45:36.201452 2754 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:45:36.202109 kubelet[2754]: I0213 19:45:36.201467 2754 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:45:36.202109 kubelet[2754]: I0213 19:45:36.201921 2754 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:45:36.206057 kubelet[2754]: E0213 19:45:36.205925 2754 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-60\" not found" Feb 13 19:45:36.240349 kubelet[2754]: E0213 19:45:36.240306 2754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-60?timeout=10s\": dial tcp 172.31.19.60:6443: connect: connection refused" interval="400ms" Feb 13 19:45:36.255373 systemd[1]: Created slice kubepods-burstable-pod9ed52e1609f9428278f9ea7b29ff66a3.slice - libcontainer container kubepods-burstable-pod9ed52e1609f9428278f9ea7b29ff66a3.slice. Feb 13 19:45:36.272224 systemd[1]: Created slice kubepods-burstable-pod0e0224d6ea849b3a888b6d6920e5fd09.slice - libcontainer container kubepods-burstable-pod0e0224d6ea849b3a888b6d6920e5fd09.slice. Feb 13 19:45:36.285489 systemd[1]: Created slice kubepods-burstable-pod605fe5ee921698279e7b9ff2cdd0ccb8.slice - libcontainer container kubepods-burstable-pod605fe5ee921698279e7b9ff2cdd0ccb8.slice. Feb 13 19:45:36.304366 kubelet[2754]: I0213 19:45:36.304315 2754 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-60" Feb 13 19:45:36.304981 kubelet[2754]: E0213 19:45:36.304947 2754 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.60:6443/api/v1/nodes\": dial tcp 172.31.19.60:6443: connect: connection refused" node="ip-172-31-19-60" Feb 13 19:45:36.344542 kubelet[2754]: I0213 19:45:36.344495 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ed52e1609f9428278f9ea7b29ff66a3-ca-certs\") pod \"kube-apiserver-ip-172-31-19-60\" (UID: \"9ed52e1609f9428278f9ea7b29ff66a3\") " pod="kube-system/kube-apiserver-ip-172-31-19-60" Feb 13 19:45:36.344542 kubelet[2754]: I0213 19:45:36.344543 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e0224d6ea849b3a888b6d6920e5fd09-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-60\" (UID: \"0e0224d6ea849b3a888b6d6920e5fd09\") " pod="kube-system/kube-controller-manager-ip-172-31-19-60" Feb 13 19:45:36.344964 kubelet[2754]: I0213 19:45:36.344573 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e0224d6ea849b3a888b6d6920e5fd09-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-60\" (UID: \"0e0224d6ea849b3a888b6d6920e5fd09\") " pod="kube-system/kube-controller-manager-ip-172-31-19-60" Feb 13 19:45:36.344964 kubelet[2754]: I0213 19:45:36.344602 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/605fe5ee921698279e7b9ff2cdd0ccb8-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-60\" (UID: \"605fe5ee921698279e7b9ff2cdd0ccb8\") " pod="kube-system/kube-scheduler-ip-172-31-19-60" Feb 13 19:45:36.344964 kubelet[2754]: I0213 19:45:36.344627 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ed52e1609f9428278f9ea7b29ff66a3-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-60\" (UID: \"9ed52e1609f9428278f9ea7b29ff66a3\") " pod="kube-system/kube-apiserver-ip-172-31-19-60" Feb 13 19:45:36.344964 kubelet[2754]: I0213 19:45:36.344648 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ed52e1609f9428278f9ea7b29ff66a3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-60\" (UID: \"9ed52e1609f9428278f9ea7b29ff66a3\") " pod="kube-system/kube-apiserver-ip-172-31-19-60" Feb 13 19:45:36.344964 kubelet[2754]: I0213 19:45:36.344801 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0e0224d6ea849b3a888b6d6920e5fd09-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-60\" (UID: \"0e0224d6ea849b3a888b6d6920e5fd09\") " pod="kube-system/kube-controller-manager-ip-172-31-19-60" Feb 13 19:45:36.345301 kubelet[2754]: I0213 19:45:36.344828 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e0224d6ea849b3a888b6d6920e5fd09-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-60\" (UID: \"0e0224d6ea849b3a888b6d6920e5fd09\") " pod="kube-system/kube-controller-manager-ip-172-31-19-60" Feb 13 19:45:36.345301 kubelet[2754]: I0213 19:45:36.344853 2754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e0224d6ea849b3a888b6d6920e5fd09-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-60\" (UID: \"0e0224d6ea849b3a888b6d6920e5fd09\") " pod="kube-system/kube-controller-manager-ip-172-31-19-60" Feb 13 19:45:36.508579 kubelet[2754]: I0213 19:45:36.508452 2754 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-60" Feb 13 19:45:36.509457 kubelet[2754]: E0213 19:45:36.509413 2754 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.60:6443/api/v1/nodes\": dial tcp 172.31.19.60:6443: connect: connection refused" node="ip-172-31-19-60" Feb 13 19:45:36.570969 containerd[1881]: time="2025-02-13T19:45:36.570627832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-60,Uid:9ed52e1609f9428278f9ea7b29ff66a3,Namespace:kube-system,Attempt:0,}" Feb 13 19:45:36.584404 containerd[1881]: time="2025-02-13T19:45:36.584363923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-60,Uid:0e0224d6ea849b3a888b6d6920e5fd09,Namespace:kube-system,Attempt:0,}" Feb 13 19:45:36.589597 containerd[1881]: time="2025-02-13T19:45:36.589554924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-60,Uid:605fe5ee921698279e7b9ff2cdd0ccb8,Namespace:kube-system,Attempt:0,}" Feb 13 19:45:36.643113 kubelet[2754]: E0213 19:45:36.642701 2754 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://172.31.19.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-60?timeout=10s\": dial tcp 172.31.19.60:6443: connect: connection refused" interval="800ms" Feb 13 19:45:36.917680 kubelet[2754]: I0213 19:45:36.917522 2754 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-60" Feb 13 19:45:36.918976 kubelet[2754]: E0213 19:45:36.918102 2754 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.60:6443/api/v1/nodes\": dial tcp 172.31.19.60:6443: connect: connection refused" node="ip-172-31-19-60" Feb 13 19:45:36.993881 kubelet[2754]: W0213 19:45:36.993809 2754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-60&limit=500&resourceVersion=0": dial tcp 172.31.19.60:6443: connect: connection refused Feb 13 19:45:36.993881 kubelet[2754]: E0213 19:45:36.993885 2754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-60&limit=500&resourceVersion=0\": dial tcp 172.31.19.60:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:45:37.061669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount761928912.mount: Deactivated successfully. Feb 13 19:45:37.069290 containerd[1881]: time="2025-02-13T19:45:37.069048432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:45:37.072008 containerd[1881]: time="2025-02-13T19:45:37.071958272Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:45:37.085851 containerd[1881]: time="2025-02-13T19:45:37.084360904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:45:37.085851 containerd[1881]: time="2025-02-13T19:45:37.085307279Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:45:37.085851 containerd[1881]: time="2025-02-13T19:45:37.085388439Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:45:37.085851 containerd[1881]: time="2025-02-13T19:45:37.085768829Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:45:37.089940 containerd[1881]: time="2025-02-13T19:45:37.089806323Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:45:37.092872 containerd[1881]: time="2025-02-13T19:45:37.092826071Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 513.8171ms" Feb 
13 19:45:37.098554 containerd[1881]: time="2025-02-13T19:45:37.096175612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:45:37.099149 containerd[1881]: time="2025-02-13T19:45:37.099101442Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 509.360294ms" Feb 13 19:45:37.101116 containerd[1881]: time="2025-02-13T19:45:37.100789406Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 516.305604ms" Feb 13 19:45:37.258279 kubelet[2754]: W0213 19:45:37.257387 2754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.60:6443: connect: connection refused Feb 13 19:45:37.258279 kubelet[2754]: E0213 19:45:37.257467 2754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.60:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:45:37.321593 kubelet[2754]: W0213 19:45:37.321524 2754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.60:6443: connect: connection refused Feb 13 19:45:37.323101 kubelet[2754]: E0213 19:45:37.323071 2754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.60:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:45:37.372089 containerd[1881]: time="2025-02-13T19:45:37.364028624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:45:37.373625 containerd[1881]: time="2025-02-13T19:45:37.371389455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:45:37.373625 containerd[1881]: time="2025-02-13T19:45:37.372822039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:45:37.373625 containerd[1881]: time="2025-02-13T19:45:37.372843724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:37.373625 containerd[1881]: time="2025-02-13T19:45:37.373031478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:37.375376 containerd[1881]: time="2025-02-13T19:45:37.373423676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:45:37.375376 containerd[1881]: time="2025-02-13T19:45:37.373468088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:37.375376 containerd[1881]: time="2025-02-13T19:45:37.373574270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:37.422168 containerd[1881]: time="2025-02-13T19:45:37.421854185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:45:37.425345 containerd[1881]: time="2025-02-13T19:45:37.425250057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:45:37.425847 containerd[1881]: time="2025-02-13T19:45:37.425370661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:37.425847 containerd[1881]: time="2025-02-13T19:45:37.425626822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:37.445970 kubelet[2754]: E0213 19:45:37.445908 2754 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-60?timeout=10s\": dial tcp 172.31.19.60:6443: connect: connection refused" interval="1.6s" Feb 13 19:45:37.453850 kubelet[2754]: W0213 19:45:37.453804 2754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.60:6443: connect: connection refused Feb 13 19:45:37.454470 kubelet[2754]: E0213 19:45:37.454347 2754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.60:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:45:37.479936 systemd[1]: Started cri-containerd-9f2d8d68f1f15deaab479971cb6eadbd0e852541f895948b7d2c05e20bfcee88.scope - libcontainer container 9f2d8d68f1f15deaab479971cb6eadbd0e852541f895948b7d2c05e20bfcee88. Feb 13 19:45:37.482130 systemd[1]: Started cri-containerd-b520102d0db080e588c9f133a4d9c245ed8b735fb04d1ef41f5f2699a8fb79d3.scope - libcontainer container b520102d0db080e588c9f133a4d9c245ed8b735fb04d1ef41f5f2699a8fb79d3. Feb 13 19:45:37.495723 systemd[1]: Started cri-containerd-fb2bb1a311a751d4ab8a5701e5412d648de56e2e5196b6c8e5e306635a6bda84.scope - libcontainer container fb2bb1a311a751d4ab8a5701e5412d648de56e2e5196b6c8e5e306635a6bda84. 
Feb 13 19:45:37.617926 containerd[1881]: time="2025-02-13T19:45:37.616414290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-60,Uid:605fe5ee921698279e7b9ff2cdd0ccb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f2d8d68f1f15deaab479971cb6eadbd0e852541f895948b7d2c05e20bfcee88\"" Feb 13 19:45:37.621348 containerd[1881]: time="2025-02-13T19:45:37.620102206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-60,Uid:0e0224d6ea849b3a888b6d6920e5fd09,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb2bb1a311a751d4ab8a5701e5412d648de56e2e5196b6c8e5e306635a6bda84\"" Feb 13 19:45:37.623995 containerd[1881]: time="2025-02-13T19:45:37.623960523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-60,Uid:9ed52e1609f9428278f9ea7b29ff66a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b520102d0db080e588c9f133a4d9c245ed8b735fb04d1ef41f5f2699a8fb79d3\"" Feb 13 19:45:37.629207 containerd[1881]: time="2025-02-13T19:45:37.629171414Z" level=info msg="CreateContainer within sandbox \"9f2d8d68f1f15deaab479971cb6eadbd0e852541f895948b7d2c05e20bfcee88\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:45:37.635164 containerd[1881]: time="2025-02-13T19:45:37.635123138Z" level=info msg="CreateContainer within sandbox \"fb2bb1a311a751d4ab8a5701e5412d648de56e2e5196b6c8e5e306635a6bda84\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:45:37.637640 containerd[1881]: time="2025-02-13T19:45:37.637377499Z" level=info msg="CreateContainer within sandbox \"b520102d0db080e588c9f133a4d9c245ed8b735fb04d1ef41f5f2699a8fb79d3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:45:37.657551 containerd[1881]: time="2025-02-13T19:45:37.657072247Z" level=info msg="CreateContainer within sandbox \"9f2d8d68f1f15deaab479971cb6eadbd0e852541f895948b7d2c05e20bfcee88\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f44560c6e4d6f135ae2b035dfae753c887c133682825deb7db8e8d5b317a7cb5\"" Feb 13 19:45:37.658532 containerd[1881]: time="2025-02-13T19:45:37.658418566Z" level=info msg="StartContainer for \"f44560c6e4d6f135ae2b035dfae753c887c133682825deb7db8e8d5b317a7cb5\"" Feb 13 19:45:37.662271 containerd[1881]: time="2025-02-13T19:45:37.662098302Z" level=info msg="CreateContainer within sandbox \"fb2bb1a311a751d4ab8a5701e5412d648de56e2e5196b6c8e5e306635a6bda84\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ff6e8910a01fd90074ebd38c2a73424088930759304472993a03af3b7d4dfe3a\"" Feb 13 19:45:37.662647 containerd[1881]: time="2025-02-13T19:45:37.662624993Z" level=info msg="StartContainer for \"ff6e8910a01fd90074ebd38c2a73424088930759304472993a03af3b7d4dfe3a\"" Feb 13 19:45:37.668715 containerd[1881]: time="2025-02-13T19:45:37.668440569Z" level=info msg="CreateContainer within sandbox \"b520102d0db080e588c9f133a4d9c245ed8b735fb04d1ef41f5f2699a8fb79d3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9e4346abcb206a3187a3e5cc732e9c69884b3b93f8307eb564e13ea47f1b34e4\"" Feb 13 19:45:37.669432 containerd[1881]: time="2025-02-13T19:45:37.669337244Z" level=info msg="StartContainer for \"9e4346abcb206a3187a3e5cc732e9c69884b3b93f8307eb564e13ea47f1b34e4\"" Feb 13 19:45:37.721042 systemd[1]: Started cri-containerd-ff6e8910a01fd90074ebd38c2a73424088930759304472993a03af3b7d4dfe3a.scope - libcontainer container 
ff6e8910a01fd90074ebd38c2a73424088930759304472993a03af3b7d4dfe3a. Feb 13 19:45:37.728530 kubelet[2754]: I0213 19:45:37.728289 2754 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-60" Feb 13 19:45:37.732658 kubelet[2754]: E0213 19:45:37.731249 2754 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.60:6443/api/v1/nodes\": dial tcp 172.31.19.60:6443: connect: connection refused" node="ip-172-31-19-60" Feb 13 19:45:37.732096 systemd[1]: Started cri-containerd-f44560c6e4d6f135ae2b035dfae753c887c133682825deb7db8e8d5b317a7cb5.scope - libcontainer container f44560c6e4d6f135ae2b035dfae753c887c133682825deb7db8e8d5b317a7cb5. Feb 13 19:45:37.751347 systemd[1]: Started cri-containerd-9e4346abcb206a3187a3e5cc732e9c69884b3b93f8307eb564e13ea47f1b34e4.scope - libcontainer container 9e4346abcb206a3187a3e5cc732e9c69884b3b93f8307eb564e13ea47f1b34e4. Feb 13 19:45:37.880027 containerd[1881]: time="2025-02-13T19:45:37.879704693Z" level=info msg="StartContainer for \"ff6e8910a01fd90074ebd38c2a73424088930759304472993a03af3b7d4dfe3a\" returns successfully" Feb 13 19:45:37.890625 containerd[1881]: time="2025-02-13T19:45:37.889751615Z" level=info msg="StartContainer for \"9e4346abcb206a3187a3e5cc732e9c69884b3b93f8307eb564e13ea47f1b34e4\" returns successfully" Feb 13 19:45:37.907846 containerd[1881]: time="2025-02-13T19:45:37.907066016Z" level=info msg="StartContainer for \"f44560c6e4d6f135ae2b035dfae753c887c133682825deb7db8e8d5b317a7cb5\" returns successfully" Feb 13 19:45:38.117601 kubelet[2754]: E0213 19:45:38.117531 2754 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.60:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:45:39.337097 kubelet[2754]: I0213 19:45:39.335039 2754 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-60" Feb 13 19:45:41.750529 kubelet[2754]: E0213 19:45:41.750437 2754 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-60\" not found" node="ip-172-31-19-60" Feb 13 19:45:41.831297 kubelet[2754]: I0213 19:45:41.831251 2754 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-19-60" Feb 13 19:45:42.007181 kubelet[2754]: I0213 19:45:42.006808 2754 apiserver.go:52] "Watching apiserver" Feb 13 19:45:42.038374 kubelet[2754]: I0213 19:45:42.038285 2754 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:45:44.393435 systemd[1]: Reloading requested from client PID 3031 ('systemctl') (unit session-7.scope)... Feb 13 19:45:44.393461 systemd[1]: Reloading... Feb 13 19:45:44.466751 update_engine[1875]: I20250213 19:45:44.465837 1875 update_attempter.cc:509] Updating boot flags... Feb 13 19:45:44.594857 zram_generator::config[3078]: No configuration found. 
Feb 13 19:45:44.611020 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3086) Feb 13 19:45:44.992028 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3069) Feb 13 19:45:45.015122 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:45:45.314972 systemd[1]: Reloading finished in 920 ms. Feb 13 19:45:45.462861 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3069) Feb 13 19:45:45.547243 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:45:45.592053 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:45:45.592425 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:45:45.592522 systemd[1]: kubelet.service: Consumed 1.016s CPU time, 113.4M memory peak, 0B memory swap peak. Feb 13 19:45:45.613376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:45:46.063032 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:45:46.086431 (kubelet)[3394]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:45:46.191365 kubelet[3394]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:45:46.191365 kubelet[3394]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:45:46.191365 kubelet[3394]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:45:46.193026 kubelet[3394]: I0213 19:45:46.191445 3394 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:45:46.234454 kubelet[3394]: I0213 19:45:46.234373 3394 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:45:46.235365 kubelet[3394]: I0213 19:45:46.235250 3394 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:45:46.236276 kubelet[3394]: I0213 19:45:46.236236 3394 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:45:46.238767 kubelet[3394]: I0213 19:45:46.238724 3394 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 13 19:45:46.247092 kubelet[3394]: I0213 19:45:46.247055 3394 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:45:46.255446 kubelet[3394]: E0213 19:45:46.255383 3394 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:45:46.255709 kubelet[3394]: I0213 19:45:46.255691 3394 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:45:46.261550 kubelet[3394]: I0213 19:45:46.261519 3394 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:45:46.263634 kubelet[3394]: I0213 19:45:46.262954 3394 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:45:46.263634 kubelet[3394]: I0213 19:45:46.263145 3394 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:45:46.263634 kubelet[3394]: I0213 19:45:46.263186 3394 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-60","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:45:46.263634 kubelet[3394]: I0213 19:45:46.263434 3394 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:45:46.265008 kubelet[3394]: I0213 19:45:46.263449 3394 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:45:46.268900 kubelet[3394]: I0213 19:45:46.268862 3394 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:45:46.274235 kubelet[3394]: I0213 19:45:46.273450 3394 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:45:46.274235 kubelet[3394]: I0213 19:45:46.273490 3394 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:45:46.274235 kubelet[3394]: I0213 
19:45:46.273537 3394 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:45:46.274235 kubelet[3394]: I0213 19:45:46.273664 3394 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:45:46.332569 kubelet[3394]: I0213 19:45:46.332320 3394 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:45:46.333574 kubelet[3394]: I0213 19:45:46.333025 3394 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:45:46.339171 kubelet[3394]: I0213 19:45:46.339003 3394 server.go:1269] "Started kubelet" Feb 13 19:45:46.343216 kubelet[3394]: I0213 19:45:46.340680 3394 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:45:46.344534 kubelet[3394]: I0213 19:45:46.344411 3394 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:45:46.345836 kubelet[3394]: I0213 19:45:46.345521 3394 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:45:46.347646 kubelet[3394]: I0213 19:45:46.347627 3394 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:45:46.350681 kubelet[3394]: I0213 19:45:46.350647 3394 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:45:46.365237 sudo[3408]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:45:46.366686 sudo[3408]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:45:46.374417 kubelet[3394]: I0213 19:45:46.370907 3394 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:45:46.375567 kubelet[3394]: I0213 19:45:46.375542 3394 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:45:46.377084 kubelet[3394]: E0213 19:45:46.377043 3394 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:45:46.380968 kubelet[3394]: I0213 19:45:46.378518 3394 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:45:46.381426 kubelet[3394]: I0213 19:45:46.378728 3394 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:45:46.382678 kubelet[3394]: I0213 19:45:46.382639 3394 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:45:46.384127 kubelet[3394]: I0213 19:45:46.383746 3394 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:45:46.395656 kubelet[3394]: I0213 19:45:46.395626 3394 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:45:46.446328 kubelet[3394]: I0213 19:45:46.446255 3394 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:45:46.453015 kubelet[3394]: I0213 19:45:46.452718 3394 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:45:46.453015 kubelet[3394]: I0213 19:45:46.452981 3394 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:45:46.453261 kubelet[3394]: I0213 19:45:46.453009 3394 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:45:46.453261 kubelet[3394]: E0213 19:45:46.453131 3394 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:45:46.558161 kubelet[3394]: E0213 19:45:46.553315 3394 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:45:46.645737 kubelet[3394]: I0213 19:45:46.645605 3394 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:45:46.645737 kubelet[3394]: I0213 19:45:46.645646 3394 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:45:46.645737 kubelet[3394]: I0213 19:45:46.645674 3394 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:45:46.646795 kubelet[3394]: I0213 19:45:46.646031 3394 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:45:46.646795 kubelet[3394]: I0213 19:45:46.646051 3394 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:45:46.646795 kubelet[3394]: I0213 19:45:46.646079 3394 policy_none.go:49] "None policy: Start" Feb 13 19:45:46.647284 kubelet[3394]: I0213 19:45:46.647266 3394 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:45:46.647358 kubelet[3394]: I0213 19:45:46.647294 3394 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:45:46.647801 kubelet[3394]: I0213 19:45:46.647559 3394 state_mem.go:75] "Updated machine memory state" Feb 13 19:45:46.658619 kubelet[3394]: I0213 19:45:46.658585 3394 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:45:46.658897 kubelet[3394]: I0213 19:45:46.658881 3394 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:45:46.658972 kubelet[3394]: I0213 19:45:46.658901 3394 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:45:46.659672 kubelet[3394]: I0213 19:45:46.659645 3394 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:45:46.792836 kubelet[3394]: I0213 19:45:46.792634 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0e0224d6ea849b3a888b6d6920e5fd09-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-60\" (UID: \"0e0224d6ea849b3a888b6d6920e5fd09\") " pod="kube-system/kube-controller-manager-ip-172-31-19-60" Feb 13 19:45:46.792836 kubelet[3394]: I0213 19:45:46.792690 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e0224d6ea849b3a888b6d6920e5fd09-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-60\" (UID: \"0e0224d6ea849b3a888b6d6920e5fd09\") " pod="kube-system/kube-controller-manager-ip-172-31-19-60" Feb 13 19:45:46.792836 kubelet[3394]: I0213 19:45:46.792718 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e0224d6ea849b3a888b6d6920e5fd09-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-60\" (UID: 
\"0e0224d6ea849b3a888b6d6920e5fd09\") " pod="kube-system/kube-controller-manager-ip-172-31-19-60" Feb 13 19:45:46.792836 kubelet[3394]: I0213 19:45:46.792832 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e0224d6ea849b3a888b6d6920e5fd09-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-60\" (UID: \"0e0224d6ea849b3a888b6d6920e5fd09\") " pod="kube-system/kube-controller-manager-ip-172-31-19-60" Feb 13 19:45:46.793378 kubelet[3394]: I0213 19:45:46.792861 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/605fe5ee921698279e7b9ff2cdd0ccb8-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-60\" (UID: \"605fe5ee921698279e7b9ff2cdd0ccb8\") " pod="kube-system/kube-scheduler-ip-172-31-19-60" Feb 13 19:45:46.793378 kubelet[3394]: I0213 19:45:46.792885 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ed52e1609f9428278f9ea7b29ff66a3-ca-certs\") pod \"kube-apiserver-ip-172-31-19-60\" (UID: \"9ed52e1609f9428278f9ea7b29ff66a3\") " pod="kube-system/kube-apiserver-ip-172-31-19-60" Feb 13 19:45:46.793378 kubelet[3394]: I0213 19:45:46.792908 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ed52e1609f9428278f9ea7b29ff66a3-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-60\" (UID: \"9ed52e1609f9428278f9ea7b29ff66a3\") " pod="kube-system/kube-apiserver-ip-172-31-19-60" Feb 13 19:45:46.793378 kubelet[3394]: I0213 19:45:46.792931 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ed52e1609f9428278f9ea7b29ff66a3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-60\" (UID: \"9ed52e1609f9428278f9ea7b29ff66a3\") " pod="kube-system/kube-apiserver-ip-172-31-19-60" Feb 13 19:45:46.793378 kubelet[3394]: I0213 19:45:46.792958 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e0224d6ea849b3a888b6d6920e5fd09-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-60\" (UID: \"0e0224d6ea849b3a888b6d6920e5fd09\") " pod="kube-system/kube-controller-manager-ip-172-31-19-60" Feb 13 19:45:46.796920 kubelet[3394]: I0213 19:45:46.796403 3394 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-60" Feb 13 19:45:46.810413 kubelet[3394]: I0213 19:45:46.809990 3394 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-19-60" Feb 13 19:45:46.810413 kubelet[3394]: I0213 19:45:46.810100 3394 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-19-60" Feb 13 19:45:47.278379 kubelet[3394]: I0213 19:45:47.276086 3394 apiserver.go:52] "Watching apiserver" Feb 13 19:45:47.281594 kubelet[3394]: I0213 19:45:47.281550 3394 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:45:47.416748 kubelet[3394]: I0213 19:45:47.416406 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-60" podStartSLOduration=1.416362113 podStartE2EDuration="1.416362113s" podCreationTimestamp="2025-02-13 19:45:46 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:45:47.385489866 +0000 UTC m=+1.277967502" watchObservedRunningTime="2025-02-13 19:45:47.416362113 +0000 UTC m=+1.308839739" Feb 13 19:45:47.424273 kubelet[3394]: I0213 19:45:47.418941 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-60" podStartSLOduration=1.418917874 podStartE2EDuration="1.418917874s" podCreationTimestamp="2025-02-13 19:45:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:45:47.416104005 +0000 UTC m=+1.308581640" watchObservedRunningTime="2025-02-13 19:45:47.418917874 +0000 UTC m=+1.311395949" Feb 13 19:45:47.482256 kubelet[3394]: I0213 19:45:47.482180 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-60" podStartSLOduration=1.474570047 podStartE2EDuration="1.474570047s" podCreationTimestamp="2025-02-13 19:45:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:45:47.448029409 +0000 UTC m=+1.340507045" watchObservedRunningTime="2025-02-13 19:45:47.474570047 +0000 UTC m=+1.367047682" Feb 13 19:45:47.759793 sudo[3408]: pam_unix(sudo:session): session closed for user root Feb 13 19:45:48.612456 kubelet[3394]: I0213 19:45:48.612415 3394 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:45:48.613160 containerd[1881]: time="2025-02-13T19:45:48.612870649Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
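An aside on the pod_startup_latency_tracker entries above: podStartE2EDuration is observedRunningTime minus podCreationTimestamp. A minimal Go sketch of that arithmetic, reusing the kube-apiserver pod's timestamps from the log; the layout string is my assumption about the printed format, not taken from kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout guessed from the "2025-02-13 19:45:47.416362113 +0000 UTC" form above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-02-13 19:45:46 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-02-13 19:45:47.416362113 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// podStartE2EDuration = observedRunningTime - podCreationTimestamp
	fmt.Println(running.Sub(created)) // 1.416362113s, matching the log
}
```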
Feb 13 19:45:48.613997 kubelet[3394]: I0213 19:45:48.613957 3394 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:45:49.608674 kubelet[3394]: W0213 19:45:49.608630 3394 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-19-60" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-60' and this object Feb 13 19:45:49.608861 kubelet[3394]: E0213 19:45:49.608690 3394 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-19-60\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-19-60' and this object" logger="UnhandledError" Feb 13 19:45:49.608861 kubelet[3394]: W0213 19:45:49.608759 3394 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-19-60" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-60' and this object Feb 13 19:45:49.608861 kubelet[3394]: E0213 19:45:49.608792 3394 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-19-60\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-19-60' and this object" logger="UnhandledError" Feb 13 19:45:49.622333 systemd[1]: Created slice kubepods-besteffort-pod72cfed29_dada_4de9_b147_f6bc7c6856bd.slice - libcontainer container kubepods-besteffort-pod72cfed29_dada_4de9_b147_f6bc7c6856bd.slice. 
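The two kubelet entries above push the node's pod CIDR, 192.168.0.0/24, into the runtime config. A hedged stdlib sketch of validating such a value the way a consumer of this config might; nothing here is kubelet's own code:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// The pod CIDR propagated through the CRI runtime config above.
	ip, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Println(ip, ipnet, ones, bits) // 192.168.0.0 192.168.0.0/24 24 32
}
```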
Feb 13 19:45:49.630646 kubelet[3394]: I0213 19:45:49.630599 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72cfed29-dada-4de9-b147-f6bc7c6856bd-xtables-lock\") pod \"kube-proxy-kjh9n\" (UID: \"72cfed29-dada-4de9-b147-f6bc7c6856bd\") " pod="kube-system/kube-proxy-kjh9n" Feb 13 19:45:49.631116 kubelet[3394]: I0213 19:45:49.630678 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk5tt\" (UniqueName: \"kubernetes.io/projected/72cfed29-dada-4de9-b147-f6bc7c6856bd-kube-api-access-tk5tt\") pod \"kube-proxy-kjh9n\" (UID: \"72cfed29-dada-4de9-b147-f6bc7c6856bd\") " pod="kube-system/kube-proxy-kjh9n" Feb 13 19:45:49.631116 kubelet[3394]: I0213 19:45:49.630713 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/72cfed29-dada-4de9-b147-f6bc7c6856bd-kube-proxy\") pod \"kube-proxy-kjh9n\" (UID: \"72cfed29-dada-4de9-b147-f6bc7c6856bd\") " pod="kube-system/kube-proxy-kjh9n" Feb 13 19:45:49.631116 kubelet[3394]: I0213 19:45:49.630736 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72cfed29-dada-4de9-b147-f6bc7c6856bd-lib-modules\") pod \"kube-proxy-kjh9n\" (UID: \"72cfed29-dada-4de9-b147-f6bc7c6856bd\") " pod="kube-system/kube-proxy-kjh9n" Feb 13 19:45:49.658203 systemd[1]: Created slice kubepods-burstable-pod15320574_c84b_487c_aaed_c139223db200.slice - libcontainer container kubepods-burstable-pod15320574_c84b_487c_aaed_c139223db200.slice. Feb 13 19:45:49.701224 kubelet[3394]: W0213 19:45:49.701156 3394 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-19-60" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-60' and this object Feb 13 19:45:49.701224 kubelet[3394]: E0213 19:45:49.701214 3394 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-19-60\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-19-60' and this object" logger="UnhandledError" Feb 13 19:45:49.701455 kubelet[3394]: W0213 19:45:49.701330 3394 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-19-60" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-60' and this object Feb 13 19:45:49.701455 kubelet[3394]: E0213 19:45:49.701348 3394 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-19-60\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-19-60' and this object" logger="UnhandledError" Feb 13 19:45:49.725531 kubelet[3394]: W0213 19:45:49.725487 3394 reflector.go:561] object-"kube-system"/"cilium-config": 
failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-19-60" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-60' and this object Feb 13 19:45:49.725711 kubelet[3394]: E0213 19:45:49.725563 3394 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-19-60\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-19-60' and this object" logger="UnhandledError" Feb 13 19:45:49.732060 kubelet[3394]: I0213 19:45:49.732008 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-cni-path\") pod \"cilium-nptww\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " pod="kube-system/cilium-nptww" Feb 13 19:45:49.732240 kubelet[3394]: I0213 19:45:49.732098 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15320574-c84b-487c-aaed-c139223db200-cilium-config-path\") pod \"cilium-nptww\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " pod="kube-system/cilium-nptww" Feb 13 19:45:49.732240 kubelet[3394]: I0213 19:45:49.732156 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-host-proc-sys-net\") pod \"cilium-nptww\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " pod="kube-system/cilium-nptww" Feb 13 19:45:49.732401 kubelet[3394]: I0213 19:45:49.732364 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-bpf-maps\") pod \"cilium-nptww\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " pod="kube-system/cilium-nptww" Feb 13 19:45:49.732477 kubelet[3394]: I0213 19:45:49.732451 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-cilium-run\") pod \"cilium-nptww\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " pod="kube-system/cilium-nptww" Feb 13 19:45:49.732526 kubelet[3394]: I0213 19:45:49.732480 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-xtables-lock\") pod \"cilium-nptww\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " pod="kube-system/cilium-nptww" Feb 13 19:45:49.732575 kubelet[3394]: I0213 19:45:49.732521 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-hostproc\") pod \"cilium-nptww\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " pod="kube-system/cilium-nptww" Feb 13 19:45:49.732575 kubelet[3394]: I0213 19:45:49.732551 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-etc-cni-netd\") pod \"cilium-nptww\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " pod="kube-system/cilium-nptww" Feb 13 19:45:49.732671 kubelet[3394]: I0213 19:45:49.732574 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15320574-c84b-487c-aaed-c139223db200-hubble-tls\") pod \"cilium-nptww\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " pod="kube-system/cilium-nptww" Feb 13 19:45:49.732671 kubelet[3394]: I0213 19:45:49.732618 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwmc2\" (UniqueName: \"kubernetes.io/projected/15320574-c84b-487c-aaed-c139223db200-kube-api-access-xwmc2\") pod \"cilium-nptww\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " pod="kube-system/cilium-nptww" Feb 13 19:45:49.732757 kubelet[3394]: I0213 19:45:49.732675 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-cilium-cgroup\") pod \"cilium-nptww\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " pod="kube-system/cilium-nptww" Feb 13 19:45:49.732757 kubelet[3394]: I0213 19:45:49.732711 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/15320574-c84b-487c-aaed-c139223db200-clustermesh-secrets\") pod \"cilium-nptww\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " pod="kube-system/cilium-nptww" Feb 13 19:45:49.732878 kubelet[3394]: I0213 19:45:49.732763 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-lib-modules\") pod \"cilium-nptww\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " pod="kube-system/cilium-nptww" Feb 13 19:45:49.732878 kubelet[3394]: I0213 19:45:49.732818 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-host-proc-sys-kernel\") pod \"cilium-nptww\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " pod="kube-system/cilium-nptww" Feb 13 19:45:49.896838 systemd[1]: Created slice kubepods-besteffort-pod5c0597d5_da91_4a60_828b_034d34aa1071.slice - libcontainer container kubepods-besteffort-pod5c0597d5_da91_4a60_828b_034d34aa1071.slice. 
Feb 13 19:45:49.936545 kubelet[3394]: I0213 19:45:49.936492 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c0597d5-da91-4a60-828b-034d34aa1071-cilium-config-path\") pod \"cilium-operator-5d85765b45-gk6b8\" (UID: \"5c0597d5-da91-4a60-828b-034d34aa1071\") " pod="kube-system/cilium-operator-5d85765b45-gk6b8" Feb 13 19:45:49.936850 kubelet[3394]: I0213 19:45:49.936565 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgm92\" (UniqueName: \"kubernetes.io/projected/5c0597d5-da91-4a60-828b-034d34aa1071-kube-api-access-vgm92\") pod \"cilium-operator-5d85765b45-gk6b8\" (UID: \"5c0597d5-da91-4a60-828b-034d34aa1071\") " pod="kube-system/cilium-operator-5d85765b45-gk6b8" Feb 13 19:45:50.490657 sudo[2214]: pam_unix(sudo:session): session closed for user root Feb 13 19:45:50.512790 sshd[2213]: Connection closed by 139.178.89.65 port 41594 Feb 13 19:45:50.515363 sshd-session[2211]: pam_unix(sshd:session): session closed for user core Feb 13 19:45:50.528492 systemd[1]: sshd@6-172.31.19.60:22-139.178.89.65:41594.service: Deactivated successfully. Feb 13 19:45:50.531838 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:45:50.532127 systemd[1]: session-7.scope: Consumed 5.299s CPU time, 138.8M memory peak, 0B memory swap peak. Feb 13 19:45:50.533292 systemd-logind[1873]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:45:50.540020 systemd-logind[1873]: Removed session 7. Feb 13 19:45:50.735558 kubelet[3394]: E0213 19:45:50.735413 3394 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 13 19:45:50.737143 kubelet[3394]: E0213 19:45:50.736986 3394 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/72cfed29-dada-4de9-b147-f6bc7c6856bd-kube-proxy podName:72cfed29-dada-4de9-b147-f6bc7c6856bd nodeName:}" failed. No retries permitted until 2025-02-13 19:45:51.236915752 +0000 UTC m=+5.129393379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/72cfed29-dada-4de9-b147-f6bc7c6856bd-kube-proxy") pod "kube-proxy-kjh9n" (UID: "72cfed29-dada-4de9-b147-f6bc7c6856bd") : failed to sync configmap cache: timed out waiting for the condition Feb 13 19:45:50.835886 kubelet[3394]: E0213 19:45:50.835045 3394 secret.go:188] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Feb 13 19:45:50.835886 kubelet[3394]: E0213 19:45:50.835159 3394 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15320574-c84b-487c-aaed-c139223db200-clustermesh-secrets podName:15320574-c84b-487c-aaed-c139223db200 nodeName:}" failed. No retries permitted until 2025-02-13 19:45:51.335136679 +0000 UTC m=+5.227614308 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/15320574-c84b-487c-aaed-c139223db200-clustermesh-secrets") pod "cilium-nptww" (UID: "15320574-c84b-487c-aaed-c139223db200") : failed to sync secret cache: timed out waiting for the condition Feb 13 19:45:50.835886 kubelet[3394]: E0213 19:45:50.835071 3394 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 13 19:45:50.835886 kubelet[3394]: E0213 19:45:50.835458 3394 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/15320574-c84b-487c-aaed-c139223db200-cilium-config-path podName:15320574-c84b-487c-aaed-c139223db200 nodeName:}" failed. No retries permitted until 2025-02-13 19:45:51.335441523 +0000 UTC m=+5.227919139 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/15320574-c84b-487c-aaed-c139223db200-cilium-config-path") pod "cilium-nptww" (UID: "15320574-c84b-487c-aaed-c139223db200") : failed to sync configmap cache: timed out waiting for the condition Feb 13 19:45:51.039630 kubelet[3394]: E0213 19:45:51.039578 3394 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 13 19:45:51.039839 kubelet[3394]: E0213 19:45:51.039695 3394 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c0597d5-da91-4a60-828b-034d34aa1071-cilium-config-path podName:5c0597d5-da91-4a60-828b-034d34aa1071 nodeName:}" failed. No retries permitted until 2025-02-13 19:45:51.539667374 +0000 UTC m=+5.432144990 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/5c0597d5-da91-4a60-828b-034d34aa1071-cilium-config-path") pod "cilium-operator-5d85765b45-gk6b8" (UID: "5c0597d5-da91-4a60-828b-034d34aa1071") : failed to sync configmap cache: timed out waiting for the condition Feb 13 19:45:51.438016 containerd[1881]: time="2025-02-13T19:45:51.437964891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kjh9n,Uid:72cfed29-dada-4de9-b147-f6bc7c6856bd,Namespace:kube-system,Attempt:0,}" Feb 13 19:45:51.469792 containerd[1881]: time="2025-02-13T19:45:51.469710156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nptww,Uid:15320574-c84b-487c-aaed-c139223db200,Namespace:kube-system,Attempt:0,}" Feb 13 19:45:51.520330 containerd[1881]: time="2025-02-13T19:45:51.520190607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:45:51.520330 containerd[1881]: time="2025-02-13T19:45:51.520270949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:45:51.520330 containerd[1881]: time="2025-02-13T19:45:51.520293474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:51.521740 containerd[1881]: time="2025-02-13T19:45:51.520408746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:51.574919 containerd[1881]: time="2025-02-13T19:45:51.574330054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:45:51.574919 containerd[1881]: time="2025-02-13T19:45:51.574442138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:45:51.574919 containerd[1881]: time="2025-02-13T19:45:51.574470287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:51.574919 containerd[1881]: time="2025-02-13T19:45:51.574602378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:51.575034 systemd[1]: Started cri-containerd-3c7c69f3afadd449985169f52ce079243f543e0aabf1079679cc2027ef6e16b2.scope - libcontainer container 3c7c69f3afadd449985169f52ce079243f543e0aabf1079679cc2027ef6e16b2. Feb 13 19:45:51.654431 systemd[1]: Started cri-containerd-ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a.scope - libcontainer container ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a. Feb 13 19:45:51.712689 containerd[1881]: time="2025-02-13T19:45:51.712236269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gk6b8,Uid:5c0597d5-da91-4a60-828b-034d34aa1071,Namespace:kube-system,Attempt:0,}" Feb 13 19:45:51.758640 containerd[1881]: time="2025-02-13T19:45:51.758370198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nptww,Uid:15320574-c84b-487c-aaed-c139223db200,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\"" Feb 13 19:45:51.765083 containerd[1881]: time="2025-02-13T19:45:51.764299819Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:45:51.777594 containerd[1881]: time="2025-02-13T19:45:51.777545883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kjh9n,Uid:72cfed29-dada-4de9-b147-f6bc7c6856bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c7c69f3afadd449985169f52ce079243f543e0aabf1079679cc2027ef6e16b2\"" Feb 13 19:45:51.786613 containerd[1881]: time="2025-02-13T19:45:51.785968919Z" level=info msg="CreateContainer within sandbox \"3c7c69f3afadd449985169f52ce079243f543e0aabf1079679cc2027ef6e16b2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:45:51.797979 containerd[1881]: time="2025-02-13T19:45:51.797554730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:45:51.797979 containerd[1881]: time="2025-02-13T19:45:51.797933154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:45:51.797979 containerd[1881]: time="2025-02-13T19:45:51.797949664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:51.798629 containerd[1881]: time="2025-02-13T19:45:51.798061245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:45:51.892173 containerd[1881]: time="2025-02-13T19:45:51.892010996Z" level=info msg="CreateContainer within sandbox \"3c7c69f3afadd449985169f52ce079243f543e0aabf1079679cc2027ef6e16b2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e9befefb3f859e1c84f29ceea5ca84db5b37b44079c780b903972ec6bbf5efd6\"" Feb 13 19:45:51.894822 containerd[1881]: time="2025-02-13T19:45:51.892838345Z" level=info msg="StartContainer for \"e9befefb3f859e1c84f29ceea5ca84db5b37b44079c780b903972ec6bbf5efd6\"" Feb 13 19:45:51.905096 systemd[1]: Started cri-containerd-4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099.scope - libcontainer container 4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099. Feb 13 19:45:51.939985 systemd[1]: Started cri-containerd-e9befefb3f859e1c84f29ceea5ca84db5b37b44079c780b903972ec6bbf5efd6.scope - libcontainer container e9befefb3f859e1c84f29ceea5ca84db5b37b44079c780b903972ec6bbf5efd6. Feb 13 19:45:51.998616 containerd[1881]: time="2025-02-13T19:45:51.998341507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gk6b8,Uid:5c0597d5-da91-4a60-828b-034d34aa1071,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099\"" Feb 13 19:45:52.008470 containerd[1881]: time="2025-02-13T19:45:52.008329392Z" level=info msg="StartContainer for \"e9befefb3f859e1c84f29ceea5ca84db5b37b44079c780b903972ec6bbf5efd6\" returns successfully" Feb 13 19:45:52.858114 kubelet[3394]: I0213 19:45:52.855895 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kjh9n" podStartSLOduration=3.855852858 podStartE2EDuration="3.855852858s" podCreationTimestamp="2025-02-13 19:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:45:52.606635871 +0000 UTC m=+6.499113506" watchObservedRunningTime="2025-02-13 19:45:52.855852858 +0000 UTC m=+6.748330493" Feb 13 19:45:58.658316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3884178171.mount: Deactivated successfully. 
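The MountVolume.SetUp failures a few entries back clear up on their own once the configmap and secret caches sync; the kubelet parks each failed operation and permits no retry until durationBeforeRetry (500ms in the log) has elapsed. A minimal sketch of that wait-and-retry shape; the attempt cap and the fixed delay are illustrative assumptions, not kubelet's exact policy:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryMount parks a failed mount and retries it after a fixed delay,
// mirroring the nestedpendingoperations entries above.
func retryMount(mount func() error) error {
	const durationBeforeRetry = 500 * time.Millisecond
	var err error
	for attempt := 0; attempt < 4; attempt++ {
		if err = mount(); err == nil {
			return nil
		}
		time.Sleep(durationBeforeRetry)
	}
	return err
}

func main() {
	calls := 0
	err := retryMount(func() error {
		calls++
		if calls < 3 { // the informer cache syncs while we wait
			return errors.New("failed to sync configmap cache")
		}
		return nil
	})
	fmt.Println(calls, err) // 3 <nil>
}
```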
Feb 13 19:46:04.104079 containerd[1881]: time="2025-02-13T19:46:04.104010642Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:04.105991 containerd[1881]: time="2025-02-13T19:46:04.105818378Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 19:46:04.109029 containerd[1881]: time="2025-02-13T19:46:04.108763132Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:04.114749 containerd[1881]: time="2025-02-13T19:46:04.114560210Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.350205701s" Feb 13 19:46:04.114749 containerd[1881]: time="2025-02-13T19:46:04.114615850Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 19:46:04.129340 containerd[1881]: time="2025-02-13T19:46:04.129263050Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:46:04.143396 containerd[1881]: time="2025-02-13T19:46:04.143219618Z" level=info msg="CreateContainer within sandbox \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:46:04.500361 containerd[1881]: time="2025-02-13T19:46:04.500304220Z" level=info msg="CreateContainer within sandbox \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8\"" Feb 13 19:46:04.516411 containerd[1881]: time="2025-02-13T19:46:04.513656938Z" level=info msg="StartContainer for \"ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8\"" Feb 13 19:46:04.975023 systemd[1]: Started cri-containerd-ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8.scope - libcontainer container ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8. Feb 13 19:46:05.091139 containerd[1881]: time="2025-02-13T19:46:05.090877104Z" level=info msg="StartContainer for \"ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8\" returns successfully" Feb 13 19:46:05.154761 systemd[1]: cri-containerd-ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8.scope: Deactivated successfully. Feb 13 19:46:05.503716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8-rootfs.mount: Deactivated successfully. 
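The pull above reports 166730503 bytes read over 12.350205701s. Back-of-the-envelope throughput, as a quick Go sketch:

```go
package main

import "fmt"

func main() {
	// Figures from the containerd "stop pulling image" and "Pulled image" entries above.
	const bytesRead = 166730503
	const seconds = 12.350205701
	mib := float64(bytesRead) / (1024 * 1024)
	fmt.Printf("%.1f MiB in %.1fs = %.1f MiB/s\n", mib, seconds, mib/seconds) // ~12.9 MiB/s
}
```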
Feb 13 19:46:05.672329 containerd[1881]: time="2025-02-13T19:46:05.646632234Z" level=info msg="shim disconnected" id=ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8 namespace=k8s.io Feb 13 19:46:05.673168 containerd[1881]: time="2025-02-13T19:46:05.672336908Z" level=warning msg="cleaning up after shim disconnected" id=ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8 namespace=k8s.io Feb 13 19:46:05.673168 containerd[1881]: time="2025-02-13T19:46:05.672360418Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:46:05.703839 containerd[1881]: time="2025-02-13T19:46:05.702528262Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:46:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:46:05.856977 containerd[1881]: time="2025-02-13T19:46:05.851685421Z" level=info msg="CreateContainer within sandbox \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:46:05.933912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3559651438.mount: Deactivated successfully. Feb 13 19:46:05.968377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1723416062.mount: Deactivated successfully. Feb 13 19:46:05.974506 containerd[1881]: time="2025-02-13T19:46:05.974373321Z" level=info msg="CreateContainer within sandbox \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf\"" Feb 13 19:46:05.977475 containerd[1881]: time="2025-02-13T19:46:05.977318739Z" level=info msg="StartContainer for \"66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf\"" Feb 13 19:46:06.051307 systemd[1]: Started cri-containerd-66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf.scope - libcontainer container 66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf. Feb 13 19:46:06.161260 containerd[1881]: time="2025-02-13T19:46:06.158876121Z" level=info msg="StartContainer for \"66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf\" returns successfully" Feb 13 19:46:06.198434 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:46:06.199475 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:46:06.199580 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:46:06.214723 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:46:06.216291 systemd[1]: cri-containerd-66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf.scope: Deactivated successfully. Feb 13 19:46:06.308921 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
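The apply-sysctl-overwrites step and the systemd-sysctl.service restart above converge on the same mechanism: kernel parameters exposed as files under /proc/sys. A read-only Go sketch; the particular key is chosen for illustration:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// net.ipv4.ip_forward maps to this path; writing such files is what a
	// sysctl-overwrite step does, and reading them is the harmless inverse.
	b, err := os.ReadFile("/proc/sys/net/ipv4/ip_forward")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("net.ipv4.ip_forward =", strings.TrimSpace(string(b)))
}
```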
Feb 13 19:46:06.328985 containerd[1881]: time="2025-02-13T19:46:06.328733536Z" level=info msg="shim disconnected" id=66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf namespace=k8s.io Feb 13 19:46:06.328985 containerd[1881]: time="2025-02-13T19:46:06.328818035Z" level=warning msg="cleaning up after shim disconnected" id=66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf namespace=k8s.io Feb 13 19:46:06.328985 containerd[1881]: time="2025-02-13T19:46:06.328830761Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:46:06.357716 containerd[1881]: time="2025-02-13T19:46:06.357659069Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:46:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:46:06.890563 containerd[1881]: time="2025-02-13T19:46:06.889211581Z" level=info msg="CreateContainer within sandbox \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:46:07.104250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount128757422.mount: Deactivated successfully. Feb 13 19:46:07.112621 containerd[1881]: time="2025-02-13T19:46:07.112566020Z" level=info msg="CreateContainer within sandbox \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7\"" Feb 13 19:46:07.112924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3230624922.mount: Deactivated successfully. Feb 13 19:46:07.117022 containerd[1881]: time="2025-02-13T19:46:07.113579174Z" level=info msg="StartContainer for \"9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7\"" Feb 13 19:46:07.248046 systemd[1]: Started cri-containerd-9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7.scope - libcontainer container 9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7. Feb 13 19:46:07.358087 containerd[1881]: time="2025-02-13T19:46:07.355433247Z" level=info msg="StartContainer for \"9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7\" returns successfully" Feb 13 19:46:07.357383 systemd[1]: cri-containerd-9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7.scope: Deactivated successfully. Feb 13 19:46:07.513647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7-rootfs.mount: Deactivated successfully. 
Feb 13 19:46:07.562239 containerd[1881]: time="2025-02-13T19:46:07.561386996Z" level=info msg="shim disconnected" id=9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7 namespace=k8s.io Feb 13 19:46:07.562239 containerd[1881]: time="2025-02-13T19:46:07.561457501Z" level=warning msg="cleaning up after shim disconnected" id=9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7 namespace=k8s.io Feb 13 19:46:07.562239 containerd[1881]: time="2025-02-13T19:46:07.561469879Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:46:07.666664 containerd[1881]: time="2025-02-13T19:46:07.666607331Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:07.667750 containerd[1881]: time="2025-02-13T19:46:07.667619729Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 19:46:07.670792 containerd[1881]: time="2025-02-13T19:46:07.670461121Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:46:07.672597 containerd[1881]: time="2025-02-13T19:46:07.672547017Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.543220844s" Feb 13 19:46:07.672771 containerd[1881]: time="2025-02-13T19:46:07.672748902Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 19:46:07.677009 containerd[1881]: time="2025-02-13T19:46:07.676960667Z" level=info msg="CreateContainer within sandbox \"4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:46:07.706160 containerd[1881]: time="2025-02-13T19:46:07.706107001Z" level=info msg="CreateContainer within sandbox \"4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5193b7ddb379998eae05ab2aec28acbb5e9b289791b91c23ee09ae18b02e581a\"" Feb 13 19:46:07.708073 containerd[1881]: time="2025-02-13T19:46:07.706721079Z" level=info msg="StartContainer for \"5193b7ddb379998eae05ab2aec28acbb5e9b289791b91c23ee09ae18b02e581a\"" Feb 13 19:46:07.755095 systemd[1]: Started cri-containerd-5193b7ddb379998eae05ab2aec28acbb5e9b289791b91c23ee09ae18b02e581a.scope - libcontainer container 5193b7ddb379998eae05ab2aec28acbb5e9b289791b91c23ee09ae18b02e581a. 
Feb 13 19:46:07.801685 containerd[1881]: time="2025-02-13T19:46:07.801518836Z" level=info msg="StartContainer for \"5193b7ddb379998eae05ab2aec28acbb5e9b289791b91c23ee09ae18b02e581a\" returns successfully" Feb 13 19:46:07.883124 containerd[1881]: time="2025-02-13T19:46:07.882932387Z" level=info msg="CreateContainer within sandbox \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:46:07.896916 kubelet[3394]: I0213 19:46:07.896838 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-gk6b8" podStartSLOduration=3.224122114 podStartE2EDuration="18.896812646s" podCreationTimestamp="2025-02-13 19:45:49 +0000 UTC" firstStartedPulling="2025-02-13 19:45:52.001187169 +0000 UTC m=+5.893664820" lastFinishedPulling="2025-02-13 19:46:07.67387773 +0000 UTC m=+21.566355352" observedRunningTime="2025-02-13 19:46:07.89646653 +0000 UTC m=+21.788944162" watchObservedRunningTime="2025-02-13 19:46:07.896812646 +0000 UTC m=+21.789290284" Feb 13 19:46:07.912287 containerd[1881]: time="2025-02-13T19:46:07.912129870Z" level=info msg="CreateContainer within sandbox \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693\"" Feb 13 19:46:07.915630 containerd[1881]: time="2025-02-13T19:46:07.914125268Z" level=info msg="StartContainer for \"c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693\"" Feb 13 19:46:07.984876 systemd[1]: Started cri-containerd-c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693.scope - libcontainer container c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693. Feb 13 19:46:08.071592 systemd[1]: cri-containerd-c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693.scope: Deactivated successfully. 
Feb 13 19:46:08.075011 containerd[1881]: time="2025-02-13T19:46:08.074961658Z" level=info msg="StartContainer for \"c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693\" returns successfully" Feb 13 19:46:08.177992 containerd[1881]: time="2025-02-13T19:46:08.177917846Z" level=info msg="shim disconnected" id=c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693 namespace=k8s.io Feb 13 19:46:08.178757 containerd[1881]: time="2025-02-13T19:46:08.178521906Z" level=warning msg="cleaning up after shim disconnected" id=c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693 namespace=k8s.io Feb 13 19:46:08.178757 containerd[1881]: time="2025-02-13T19:46:08.178548327Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:46:08.891698 containerd[1881]: time="2025-02-13T19:46:08.891329681Z" level=info msg="CreateContainer within sandbox \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:46:08.921820 containerd[1881]: time="2025-02-13T19:46:08.921571801Z" level=info msg="CreateContainer within sandbox \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34\"" Feb 13 19:46:08.925836 containerd[1881]: time="2025-02-13T19:46:08.922806769Z" level=info msg="StartContainer for \"038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34\"" Feb 13 19:46:09.039864 systemd[1]: Started cri-containerd-038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34.scope - libcontainer container 038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34. Feb 13 19:46:09.106136 containerd[1881]: time="2025-02-13T19:46:09.106038677Z" level=info msg="StartContainer for \"038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34\" returns successfully" Feb 13 19:46:09.549169 kubelet[3394]: I0213 19:46:09.549124 3394 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 19:46:09.758423 systemd[1]: Created slice kubepods-burstable-pod2923fbc6_afdd_4eeb_bf86_4ab27cc3cd47.slice - libcontainer container kubepods-burstable-pod2923fbc6_afdd_4eeb_bf86_4ab27cc3cd47.slice. Feb 13 19:46:09.775288 systemd[1]: Created slice kubepods-burstable-pod8bc7b766_e1de_44b8_b0e0_5b5fcee49343.slice - libcontainer container kubepods-burstable-pod8bc7b766_e1de_44b8_b0e0_5b5fcee49343.slice. 
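Reading the CreateContainer/StartContainer pairs from mount-cgroup onward, each setup container's scope deactivates before the next container is created, so the cilium pod's bring-up is strictly sequential. A trivial sketch of that ordering; the names are taken from the ContainerMetadata fields above, the runner itself is illustrative:

```go
package main

import "fmt"

func main() {
	// Order recovered from the log; cilium-agent is the long-running
	// container started once the earlier steps have exited.
	steps := []string{
		"mount-cgroup",
		"apply-sysctl-overwrites",
		"mount-bpf-fs",
		"clean-cilium-state",
		"cilium-agent",
	}
	for i, name := range steps {
		fmt.Printf("%d/%d %s\n", i+1, len(steps), name)
	}
}
```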
Feb 13 19:46:09.897483 kubelet[3394]: I0213 19:46:09.897354 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bc7b766-e1de-44b8-b0e0-5b5fcee49343-config-volume\") pod \"coredns-6f6b679f8f-l42jp\" (UID: \"8bc7b766-e1de-44b8-b0e0-5b5fcee49343\") " pod="kube-system/coredns-6f6b679f8f-l42jp" Feb 13 19:46:09.897654 kubelet[3394]: I0213 19:46:09.897429 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-692bs\" (UniqueName: \"kubernetes.io/projected/2923fbc6-afdd-4eeb-bf86-4ab27cc3cd47-kube-api-access-692bs\") pod \"coredns-6f6b679f8f-kkf7d\" (UID: \"2923fbc6-afdd-4eeb-bf86-4ab27cc3cd47\") " pod="kube-system/coredns-6f6b679f8f-kkf7d" Feb 13 19:46:09.897654 kubelet[3394]: I0213 19:46:09.897529 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrxc4\" (UniqueName: \"kubernetes.io/projected/8bc7b766-e1de-44b8-b0e0-5b5fcee49343-kube-api-access-vrxc4\") pod \"coredns-6f6b679f8f-l42jp\" (UID: \"8bc7b766-e1de-44b8-b0e0-5b5fcee49343\") " pod="kube-system/coredns-6f6b679f8f-l42jp" Feb 13 19:46:09.897839 kubelet[3394]: I0213 19:46:09.897555 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2923fbc6-afdd-4eeb-bf86-4ab27cc3cd47-config-volume\") pod \"coredns-6f6b679f8f-kkf7d\" (UID: \"2923fbc6-afdd-4eeb-bf86-4ab27cc3cd47\") " pod="kube-system/coredns-6f6b679f8f-kkf7d" Feb 13 19:46:10.075200 containerd[1881]: time="2025-02-13T19:46:10.075144400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkf7d,Uid:2923fbc6-afdd-4eeb-bf86-4ab27cc3cd47,Namespace:kube-system,Attempt:0,}" Feb 13 19:46:10.084413 containerd[1881]: time="2025-02-13T19:46:10.084090682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l42jp,Uid:8bc7b766-e1de-44b8-b0e0-5b5fcee49343,Namespace:kube-system,Attempt:0,}" Feb 13 19:46:12.781789 systemd-networkd[1717]: cilium_host: Link UP Feb 13 19:46:12.787130 systemd-networkd[1717]: cilium_net: Link UP Feb 13 19:46:12.796770 systemd-networkd[1717]: cilium_net: Gained carrier Feb 13 19:46:12.797660 (udev-worker)[4223]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:46:12.804358 systemd-networkd[1717]: cilium_host: Gained carrier Feb 13 19:46:12.804722 systemd-networkd[1717]: cilium_net: Gained IPv6LL Feb 13 19:46:12.807938 (udev-worker)[4190]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:46:13.007861 systemd-networkd[1717]: cilium_vxlan: Link UP Feb 13 19:46:13.007874 systemd-networkd[1717]: cilium_vxlan: Gained carrier Feb 13 19:46:13.518947 systemd-networkd[1717]: cilium_host: Gained IPv6LL Feb 13 19:46:13.956031 kernel: NET: Registered PF_ALG protocol family Feb 13 19:46:14.223058 systemd-networkd[1717]: cilium_vxlan: Gained IPv6LL Feb 13 19:46:15.453328 (udev-worker)[4238]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:46:15.489267 systemd-networkd[1717]: lxc_health: Link UP Feb 13 19:46:15.505069 systemd-networkd[1717]: lxc_health: Gained carrier Feb 13 19:46:15.527809 kubelet[3394]: I0213 19:46:15.526500 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nptww" podStartSLOduration=14.163374491 podStartE2EDuration="26.526469992s" podCreationTimestamp="2025-02-13 19:45:49 +0000 UTC" firstStartedPulling="2025-02-13 19:45:51.761037578 +0000 UTC m=+5.653515193" lastFinishedPulling="2025-02-13 19:46:04.124133057 +0000 UTC m=+18.016610694" observedRunningTime="2025-02-13 19:46:09.937701149 +0000 UTC m=+23.830178805" watchObservedRunningTime="2025-02-13 19:46:15.526469992 +0000 UTC m=+29.418947628" Feb 13 19:46:16.331173 systemd-networkd[1717]: lxc6a492c32ecc0: Link UP Feb 13 19:46:16.331516 systemd-networkd[1717]: lxcfea3ff42dccb: Link UP Feb 13 19:46:16.345811 kernel: eth0: renamed from tmp46661 Feb 13 19:46:16.352926 kernel: eth0: renamed from tmp679cc Feb 13 19:46:16.360076 systemd-networkd[1717]: lxc6a492c32ecc0: Gained carrier Feb 13 19:46:16.362536 systemd-networkd[1717]: lxcfea3ff42dccb: Gained carrier Feb 13 19:46:16.847058 systemd-networkd[1717]: lxc_health: Gained IPv6LL Feb 13 19:46:17.682440 systemd-networkd[1717]: lxcfea3ff42dccb: Gained IPv6LL Feb 13 19:46:18.129611 systemd-networkd[1717]: lxc6a492c32ecc0: Gained IPv6LL Feb 13 19:46:20.914218 ntpd[1864]: Listen normally on 7 cilium_host 192.168.0.111:123 Feb 13 19:46:20.914325 ntpd[1864]: Listen normally on 8 cilium_net [fe80::549b:b8ff:fe0c:755%4]:123 Feb 13 19:46:20.914389 ntpd[1864]: Listen normally on 9 cilium_host [fe80::2076:42ff:feb1:4129%5]:123 Feb 13 19:46:20.914434 ntpd[1864]: Listen normally on 10 cilium_vxlan [fe80::5ca0:5fff:fe65:f082%6]:123 Feb 13 19:46:20.914474 ntpd[1864]: Listen normally on 11 lxc_health [fe80::bc13:bdff:fe53:acc4%8]:123 Feb 13 19:46:20.914513 ntpd[1864]: Listen normally on 12 lxc6a492c32ecc0 [fe80::84f5:8eff:feb1:d3ce%10]:123 Feb 13 19:46:20.914552 ntpd[1864]: Listen normally on 13 lxcfea3ff42dccb [fe80::809f:d6ff:fe21:4e69%12]:123 Feb 13 19:46:25.389766 containerd[1881]: time="2025-02-13T19:46:25.389233165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:46:25.389766 containerd[1881]: time="2025-02-13T19:46:25.389396054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:46:25.389766 containerd[1881]: time="2025-02-13T19:46:25.389423927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:25.389766 containerd[1881]: time="2025-02-13T19:46:25.389552915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:25.458312 systemd[1]: Started cri-containerd-679cccb41c06184764df71044478bf8b9006280c596352e1f88bb4eafb8f7790.scope - libcontainer container 679cccb41c06184764df71044478bf8b9006280c596352e1f88bb4eafb8f7790. Feb 13 19:46:25.495423 containerd[1881]: time="2025-02-13T19:46:25.494758624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:46:25.495423 containerd[1881]: time="2025-02-13T19:46:25.495152415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:46:25.496635 containerd[1881]: time="2025-02-13T19:46:25.496558276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:25.503437 containerd[1881]: time="2025-02-13T19:46:25.500082734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:46:25.549736 systemd[1]: run-containerd-runc-k8s.io-466610efcf88d47c5efccde2bca36fb4b6aa8d450a9a4d28de9f7b2fc6f59e76-runc.OopeRg.mount: Deactivated successfully. Feb 13 19:46:25.561081 systemd[1]: Started cri-containerd-466610efcf88d47c5efccde2bca36fb4b6aa8d450a9a4d28de9f7b2fc6f59e76.scope - libcontainer container 466610efcf88d47c5efccde2bca36fb4b6aa8d450a9a4d28de9f7b2fc6f59e76. 
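The ntpd entries further up bind to link-local IPv6 addresses carrying a %N zone suffix, the index of the interface that scopes the address. A stdlib sketch of parsing one of them:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// One of the addresses ntpd listed above; "%4" is cilium_net's interface index.
	addr, err := netip.ParseAddr("fe80::549b:b8ff:fe0c:755%4")
	if err != nil {
		panic(err)
	}
	fmt.Println(addr.IsLinkLocalUnicast(), addr.Zone()) // true 4
}
```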
Feb 13 19:46:25.657400 containerd[1881]: time="2025-02-13T19:46:25.656566003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l42jp,Uid:8bc7b766-e1de-44b8-b0e0-5b5fcee49343,Namespace:kube-system,Attempt:0,} returns sandbox id \"679cccb41c06184764df71044478bf8b9006280c596352e1f88bb4eafb8f7790\"" Feb 13 19:46:25.663383 containerd[1881]: time="2025-02-13T19:46:25.662472110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kkf7d,Uid:2923fbc6-afdd-4eeb-bf86-4ab27cc3cd47,Namespace:kube-system,Attempt:0,} returns sandbox id \"466610efcf88d47c5efccde2bca36fb4b6aa8d450a9a4d28de9f7b2fc6f59e76\"" Feb 13 19:46:25.668540 containerd[1881]: time="2025-02-13T19:46:25.668493213Z" level=info msg="CreateContainer within sandbox \"679cccb41c06184764df71044478bf8b9006280c596352e1f88bb4eafb8f7790\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:46:25.672063 containerd[1881]: time="2025-02-13T19:46:25.671134320Z" level=info msg="CreateContainer within sandbox \"466610efcf88d47c5efccde2bca36fb4b6aa8d450a9a4d28de9f7b2fc6f59e76\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:46:25.710467 containerd[1881]: time="2025-02-13T19:46:25.710405497Z" level=info msg="CreateContainer within sandbox \"679cccb41c06184764df71044478bf8b9006280c596352e1f88bb4eafb8f7790\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0957c3c4536fce2935e07f1ca0594816599f29dbdac0296a640d4a6604b7f97c\"" Feb 13 19:46:25.717658 containerd[1881]: time="2025-02-13T19:46:25.713498729Z" level=info msg="StartContainer for \"0957c3c4536fce2935e07f1ca0594816599f29dbdac0296a640d4a6604b7f97c\"" Feb 13 19:46:25.717658 containerd[1881]: time="2025-02-13T19:46:25.716224223Z" level=info msg="CreateContainer within sandbox \"466610efcf88d47c5efccde2bca36fb4b6aa8d450a9a4d28de9f7b2fc6f59e76\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0591f4fb99b456c8d93b7ed934ca176b051d48733a02d09b3cce2322c00d872\"" Feb 13 19:46:25.720173 containerd[1881]: time="2025-02-13T19:46:25.720115640Z" level=info msg="StartContainer for \"c0591f4fb99b456c8d93b7ed934ca176b051d48733a02d09b3cce2322c00d872\"" Feb 13 19:46:25.862129 systemd[1]: Started cri-containerd-0957c3c4536fce2935e07f1ca0594816599f29dbdac0296a640d4a6604b7f97c.scope - libcontainer container 0957c3c4536fce2935e07f1ca0594816599f29dbdac0296a640d4a6604b7f97c. Feb 13 19:46:25.874100 systemd[1]: Started cri-containerd-c0591f4fb99b456c8d93b7ed934ca176b051d48733a02d09b3cce2322c00d872.scope - libcontainer container c0591f4fb99b456c8d93b7ed934ca176b051d48733a02d09b3cce2322c00d872. 
Feb 13 19:46:26.087577 containerd[1881]: time="2025-02-13T19:46:26.087524415Z" level=info msg="StartContainer for \"c0591f4fb99b456c8d93b7ed934ca176b051d48733a02d09b3cce2322c00d872\" returns successfully" Feb 13 19:46:26.098014 containerd[1881]: time="2025-02-13T19:46:26.093755093Z" level=info msg="StartContainer for \"0957c3c4536fce2935e07f1ca0594816599f29dbdac0296a640d4a6604b7f97c\" returns successfully" Feb 13 19:46:27.129878 kubelet[3394]: I0213 19:46:27.128562 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-l42jp" podStartSLOduration=38.128467177 podStartE2EDuration="38.128467177s" podCreationTimestamp="2025-02-13 19:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:46:27.127901393 +0000 UTC m=+41.020379028" watchObservedRunningTime="2025-02-13 19:46:27.128467177 +0000 UTC m=+41.020944816" Feb 13 19:46:27.161681 kubelet[3394]: I0213 19:46:27.160504 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-kkf7d" podStartSLOduration=38.160482201 podStartE2EDuration="38.160482201s" podCreationTimestamp="2025-02-13 19:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:46:27.156495873 +0000 UTC m=+41.048973508" watchObservedRunningTime="2025-02-13 19:46:27.160482201 +0000 UTC m=+41.052959841" Feb 13 19:46:29.450287 systemd[1]: Started sshd@7-172.31.19.60:22-139.178.89.65:44030.service - OpenSSH per-connection server daemon (139.178.89.65:44030). Feb 13 19:46:29.714569 sshd[4767]: Accepted publickey for core from 139.178.89.65 port 44030 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:46:29.728012 sshd-session[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:46:29.739651 systemd-logind[1873]: New session 8 of user core. Feb 13 19:46:29.746297 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:46:31.014542 sshd[4769]: Connection closed by 139.178.89.65 port 44030 Feb 13 19:46:31.016057 sshd-session[4767]: pam_unix(sshd:session): session closed for user core Feb 13 19:46:31.021376 systemd-logind[1873]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:46:31.023082 systemd[1]: sshd@7-172.31.19.60:22-139.178.89.65:44030.service: Deactivated successfully. Feb 13 19:46:31.027202 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:46:31.028713 systemd-logind[1873]: Removed session 8. Feb 13 19:46:36.058300 systemd[1]: Started sshd@8-172.31.19.60:22-139.178.89.65:59914.service - OpenSSH per-connection server daemon (139.178.89.65:59914). Feb 13 19:46:36.298591 sshd[4782]: Accepted publickey for core from 139.178.89.65 port 59914 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:46:36.299405 sshd-session[4782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:46:36.321981 systemd-logind[1873]: New session 9 of user core. Feb 13 19:46:36.330124 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:46:36.615595 sshd[4784]: Connection closed by 139.178.89.65 port 59914 Feb 13 19:46:36.617230 sshd-session[4782]: pam_unix(sshd:session): session closed for user core Feb 13 19:46:36.621909 systemd[1]: sshd@8-172.31.19.60:22-139.178.89.65:59914.service: Deactivated successfully. 
Feb 13 19:46:36.626627 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:46:36.629176 systemd-logind[1873]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:46:36.631211 systemd-logind[1873]: Removed session 9. Feb 13 19:46:41.656346 systemd[1]: Started sshd@9-172.31.19.60:22-139.178.89.65:59928.service - OpenSSH per-connection server daemon (139.178.89.65:59928). Feb 13 19:46:41.851221 sshd[4797]: Accepted publickey for core from 139.178.89.65 port 59928 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:46:41.852163 sshd-session[4797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:46:41.863086 systemd-logind[1873]: New session 10 of user core. Feb 13 19:46:41.874674 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:46:42.379401 sshd[4799]: Connection closed by 139.178.89.65 port 59928 Feb 13 19:46:42.381326 sshd-session[4797]: pam_unix(sshd:session): session closed for user core Feb 13 19:46:42.387014 systemd[1]: sshd@9-172.31.19.60:22-139.178.89.65:59928.service: Deactivated successfully. Feb 13 19:46:42.390030 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:46:42.391662 systemd-logind[1873]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:46:42.393205 systemd-logind[1873]: Removed session 10. Feb 13 19:46:47.420495 systemd[1]: Started sshd@10-172.31.19.60:22-139.178.89.65:54934.service - OpenSSH per-connection server daemon (139.178.89.65:54934). Feb 13 19:46:47.629629 sshd[4813]: Accepted publickey for core from 139.178.89.65 port 54934 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:46:47.630616 sshd-session[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:46:47.643604 systemd-logind[1873]: New session 11 of user core. Feb 13 19:46:47.654104 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:46:48.055222 sshd[4815]: Connection closed by 139.178.89.65 port 54934 Feb 13 19:46:48.044762 sshd-session[4813]: pam_unix(sshd:session): session closed for user core Feb 13 19:46:48.115722 systemd[1]: sshd@10-172.31.19.60:22-139.178.89.65:54934.service: Deactivated successfully. Feb 13 19:46:48.125958 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:46:48.129412 systemd-logind[1873]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:46:48.138882 systemd[1]: Started sshd@11-172.31.19.60:22-139.178.89.65:54948.service - OpenSSH per-connection server daemon (139.178.89.65:54948). Feb 13 19:46:48.141388 systemd-logind[1873]: Removed session 11. Feb 13 19:46:48.336540 sshd[4827]: Accepted publickey for core from 139.178.89.65 port 54948 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:46:48.338333 sshd-session[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:46:48.360389 systemd-logind[1873]: New session 12 of user core. Feb 13 19:46:48.369949 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:46:48.848671 sshd[4829]: Connection closed by 139.178.89.65 port 54948 Feb 13 19:46:48.851348 sshd-session[4827]: pam_unix(sshd:session): session closed for user core Feb 13 19:46:48.869214 systemd[1]: sshd@11-172.31.19.60:22-139.178.89.65:54948.service: Deactivated successfully. Feb 13 19:46:48.876683 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:46:48.883011 systemd-logind[1873]: Session 12 logged out. Waiting for processes to exit. 
Feb 13 19:46:48.908092 systemd[1]: Started sshd@12-172.31.19.60:22-139.178.89.65:54954.service - OpenSSH per-connection server daemon (139.178.89.65:54954). Feb 13 19:46:48.911832 systemd-logind[1873]: Removed session 12. Feb 13 19:46:49.141147 sshd[4839]: Accepted publickey for core from 139.178.89.65 port 54954 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:46:49.143962 sshd-session[4839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:46:49.165205 systemd-logind[1873]: New session 13 of user core. Feb 13 19:46:49.171152 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:46:49.468488 sshd[4841]: Connection closed by 139.178.89.65 port 54954 Feb 13 19:46:49.470426 sshd-session[4839]: pam_unix(sshd:session): session closed for user core Feb 13 19:46:49.475697 systemd[1]: sshd@12-172.31.19.60:22-139.178.89.65:54954.service: Deactivated successfully. Feb 13 19:46:49.478418 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:46:49.480426 systemd-logind[1873]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:46:49.484719 systemd-logind[1873]: Removed session 13. Feb 13 19:46:54.538753 systemd[1]: Started sshd@13-172.31.19.60:22-139.178.89.65:40424.service - OpenSSH per-connection server daemon (139.178.89.65:40424). Feb 13 19:46:54.722446 sshd[4855]: Accepted publickey for core from 139.178.89.65 port 40424 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:46:54.726136 sshd-session[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:46:54.744475 systemd-logind[1873]: New session 14 of user core. Feb 13 19:46:54.755648 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:46:55.039852 sshd[4857]: Connection closed by 139.178.89.65 port 40424 Feb 13 19:46:55.042950 sshd-session[4855]: pam_unix(sshd:session): session closed for user core Feb 13 19:46:55.047258 systemd[1]: sshd@13-172.31.19.60:22-139.178.89.65:40424.service: Deactivated successfully. Feb 13 19:46:55.050148 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:46:55.051172 systemd-logind[1873]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:46:55.052617 systemd-logind[1873]: Removed session 14. Feb 13 19:47:00.123713 systemd[1]: Started sshd@14-172.31.19.60:22-139.178.89.65:40440.service - OpenSSH per-connection server daemon (139.178.89.65:40440). Feb 13 19:47:00.374969 sshd[4868]: Accepted publickey for core from 139.178.89.65 port 40440 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:47:00.377789 sshd-session[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:47:00.398213 systemd-logind[1873]: New session 15 of user core. Feb 13 19:47:00.409097 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:47:00.822369 sshd[4870]: Connection closed by 139.178.89.65 port 40440 Feb 13 19:47:00.823679 sshd-session[4868]: pam_unix(sshd:session): session closed for user core Feb 13 19:47:00.831801 systemd-logind[1873]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:47:00.835176 systemd[1]: sshd@14-172.31.19.60:22-139.178.89.65:40440.service: Deactivated successfully. Feb 13 19:47:00.838711 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:47:00.841237 systemd-logind[1873]: Removed session 15. 
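The sshd@N-local-remote.service units in the session entries above come from socket-activated, per-connection sshd: each accepted TCP connection gets its own templated service instance whose name encodes a sequence number plus the local and remote endpoints. A hypothetical helper for splitting those instance names follows; parseSSHDUnit is not a systemd or OpenSSH API, and the split assumes IPv4 host:port endpoints like the ones logged here.

```go
// Hypothetical parser for per-connection unit names such as
// "sshd@7-172.31.19.60:22-139.178.89.65:44030.service".
package main

import (
	"fmt"
	"strings"
)

func parseSSHDUnit(unit string) (seq, local, remote string, ok bool) {
	inst := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
	parts := strings.SplitN(inst, "-", 3) // sequence, local endpoint, remote endpoint
	if len(parts) != 3 {
		return "", "", "", false
	}
	return parts[0], parts[1], parts[2], true
}

func main() {
	fmt.Println(parseSSHDUnit("sshd@7-172.31.19.60:22-139.178.89.65:44030.service"))
	// 7 172.31.19.60:22 139.178.89.65:44030 true
}
```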
Feb 13 19:47:05.862650 systemd[1]: Started sshd@15-172.31.19.60:22-139.178.89.65:48418.service - OpenSSH per-connection server daemon (139.178.89.65:48418). Feb 13 19:47:06.095826 sshd[4883]: Accepted publickey for core from 139.178.89.65 port 48418 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:47:06.098010 sshd-session[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:47:06.105898 systemd-logind[1873]: New session 16 of user core. Feb 13 19:47:06.125281 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:47:06.533955 sshd[4885]: Connection closed by 139.178.89.65 port 48418 Feb 13 19:47:06.536724 sshd-session[4883]: pam_unix(sshd:session): session closed for user core Feb 13 19:47:06.548588 systemd[1]: sshd@15-172.31.19.60:22-139.178.89.65:48418.service: Deactivated successfully. Feb 13 19:47:06.552072 systemd-logind[1873]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:47:06.556072 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:47:06.580670 systemd[1]: Started sshd@16-172.31.19.60:22-139.178.89.65:48432.service - OpenSSH per-connection server daemon (139.178.89.65:48432). Feb 13 19:47:06.583274 systemd-logind[1873]: Removed session 16. Feb 13 19:47:06.760823 sshd[4896]: Accepted publickey for core from 139.178.89.65 port 48432 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:47:06.763283 sshd-session[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:47:06.787916 systemd-logind[1873]: New session 17 of user core. Feb 13 19:47:06.797297 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:47:07.578677 sshd[4898]: Connection closed by 139.178.89.65 port 48432 Feb 13 19:47:07.580393 sshd-session[4896]: pam_unix(sshd:session): session closed for user core Feb 13 19:47:07.587796 systemd[1]: sshd@16-172.31.19.60:22-139.178.89.65:48432.service: Deactivated successfully. Feb 13 19:47:07.594345 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:47:07.607571 systemd-logind[1873]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:47:07.636221 systemd[1]: Started sshd@17-172.31.19.60:22-139.178.89.65:48442.service - OpenSSH per-connection server daemon (139.178.89.65:48442). Feb 13 19:47:07.637397 systemd-logind[1873]: Removed session 17. Feb 13 19:47:07.905207 sshd[4906]: Accepted publickey for core from 139.178.89.65 port 48442 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:47:07.907091 sshd-session[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:47:07.915296 systemd-logind[1873]: New session 18 of user core. Feb 13 19:47:07.922023 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:47:10.887684 sshd[4908]: Connection closed by 139.178.89.65 port 48442 Feb 13 19:47:10.890812 sshd-session[4906]: pam_unix(sshd:session): session closed for user core Feb 13 19:47:10.930252 systemd[1]: sshd@17-172.31.19.60:22-139.178.89.65:48442.service: Deactivated successfully. Feb 13 19:47:10.939535 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:47:10.948185 systemd-logind[1873]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:47:10.957397 systemd[1]: Started sshd@18-172.31.19.60:22-139.178.89.65:48448.service - OpenSSH per-connection server daemon (139.178.89.65:48448). Feb 13 19:47:10.964944 systemd-logind[1873]: Removed session 18. 
Feb 13 19:47:11.161546 sshd[4924]: Accepted publickey for core from 139.178.89.65 port 48448 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:47:11.164090 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:47:11.176817 systemd-logind[1873]: New session 19 of user core. Feb 13 19:47:11.187211 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:47:11.828380 sshd[4926]: Connection closed by 139.178.89.65 port 48448 Feb 13 19:47:11.830113 sshd-session[4924]: pam_unix(sshd:session): session closed for user core Feb 13 19:47:11.836942 systemd[1]: sshd@18-172.31.19.60:22-139.178.89.65:48448.service: Deactivated successfully. Feb 13 19:47:11.840250 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:47:11.847390 systemd-logind[1873]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:47:11.864340 systemd[1]: Started sshd@19-172.31.19.60:22-139.178.89.65:48460.service - OpenSSH per-connection server daemon (139.178.89.65:48460). Feb 13 19:47:11.866631 systemd-logind[1873]: Removed session 19. Feb 13 19:47:12.058620 sshd[4935]: Accepted publickey for core from 139.178.89.65 port 48460 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:47:12.060588 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:47:12.075559 systemd-logind[1873]: New session 20 of user core. Feb 13 19:47:12.085287 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:47:12.357577 sshd[4937]: Connection closed by 139.178.89.65 port 48460 Feb 13 19:47:12.358127 sshd-session[4935]: pam_unix(sshd:session): session closed for user core Feb 13 19:47:12.366411 systemd[1]: sshd@19-172.31.19.60:22-139.178.89.65:48460.service: Deactivated successfully. Feb 13 19:47:12.370394 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:47:12.371769 systemd-logind[1873]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:47:12.373560 systemd-logind[1873]: Removed session 20. Feb 13 19:47:17.405042 systemd[1]: Started sshd@20-172.31.19.60:22-139.178.89.65:36694.service - OpenSSH per-connection server daemon (139.178.89.65:36694). Feb 13 19:47:17.591949 sshd[4948]: Accepted publickey for core from 139.178.89.65 port 36694 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:47:17.596022 sshd-session[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:47:17.605239 systemd-logind[1873]: New session 21 of user core. Feb 13 19:47:17.607987 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:47:17.839051 sshd[4950]: Connection closed by 139.178.89.65 port 36694 Feb 13 19:47:17.841142 sshd-session[4948]: pam_unix(sshd:session): session closed for user core Feb 13 19:47:17.849640 systemd[1]: sshd@20-172.31.19.60:22-139.178.89.65:36694.service: Deactivated successfully. Feb 13 19:47:17.854606 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:47:17.858179 systemd-logind[1873]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:47:17.862765 systemd-logind[1873]: Removed session 21. Feb 13 19:47:22.882971 systemd[1]: Started sshd@21-172.31.19.60:22-139.178.89.65:36698.service - OpenSSH per-connection server daemon (139.178.89.65:36698). 
Feb 13 19:47:23.089819 sshd[4966]: Accepted publickey for core from 139.178.89.65 port 36698 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:47:23.091203 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:47:23.097212 systemd-logind[1873]: New session 22 of user core. Feb 13 19:47:23.102002 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:47:23.417756 sshd[4968]: Connection closed by 139.178.89.65 port 36698 Feb 13 19:47:23.419082 sshd-session[4966]: pam_unix(sshd:session): session closed for user core Feb 13 19:47:23.449993 systemd[1]: sshd@21-172.31.19.60:22-139.178.89.65:36698.service: Deactivated successfully. Feb 13 19:47:23.456367 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:47:23.460722 systemd-logind[1873]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:47:23.462313 systemd-logind[1873]: Removed session 22. Feb 13 19:47:28.462957 systemd[1]: Started sshd@22-172.31.19.60:22-139.178.89.65:60346.service - OpenSSH per-connection server daemon (139.178.89.65:60346). Feb 13 19:47:28.670427 sshd[4979]: Accepted publickey for core from 139.178.89.65 port 60346 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:47:28.674251 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:47:28.683081 systemd-logind[1873]: New session 23 of user core. Feb 13 19:47:28.687004 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:47:28.906031 sshd[4981]: Connection closed by 139.178.89.65 port 60346 Feb 13 19:47:28.907996 sshd-session[4979]: pam_unix(sshd:session): session closed for user core Feb 13 19:47:28.912429 systemd[1]: sshd@22-172.31.19.60:22-139.178.89.65:60346.service: Deactivated successfully. Feb 13 19:47:28.915611 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:47:28.916902 systemd-logind[1873]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:47:28.919100 systemd-logind[1873]: Removed session 23. Feb 13 19:47:33.963657 systemd[1]: Started sshd@23-172.31.19.60:22-139.178.89.65:60348.service - OpenSSH per-connection server daemon (139.178.89.65:60348). Feb 13 19:47:34.154082 sshd[4992]: Accepted publickey for core from 139.178.89.65 port 60348 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:47:34.155899 sshd-session[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:47:34.167692 systemd-logind[1873]: New session 24 of user core. Feb 13 19:47:34.178080 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:47:34.496226 sshd[4994]: Connection closed by 139.178.89.65 port 60348 Feb 13 19:47:34.500073 sshd-session[4992]: pam_unix(sshd:session): session closed for user core Feb 13 19:47:34.504952 systemd-logind[1873]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:47:34.505682 systemd[1]: sshd@23-172.31.19.60:22-139.178.89.65:60348.service: Deactivated successfully. Feb 13 19:47:34.510531 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:47:34.513101 systemd-logind[1873]: Removed session 24. Feb 13 19:47:34.534401 systemd[1]: Started sshd@24-172.31.19.60:22-139.178.89.65:52220.service - OpenSSH per-connection server daemon (139.178.89.65:52220). 
Feb 13 19:47:34.720222 sshd[5005]: Accepted publickey for core from 139.178.89.65 port 52220 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:47:34.722247 sshd-session[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:47:34.734682 systemd-logind[1873]: New session 25 of user core. Feb 13 19:47:34.744110 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:47:36.894771 containerd[1881]: time="2025-02-13T19:47:36.894712100Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:47:36.903079 containerd[1881]: time="2025-02-13T19:47:36.900114677Z" level=info msg="StopContainer for \"5193b7ddb379998eae05ab2aec28acbb5e9b289791b91c23ee09ae18b02e581a\" with timeout 30 (s)" Feb 13 19:47:36.904198 containerd[1881]: time="2025-02-13T19:47:36.903956068Z" level=info msg="Stop container \"5193b7ddb379998eae05ab2aec28acbb5e9b289791b91c23ee09ae18b02e581a\" with signal terminated" Feb 13 19:47:36.904198 containerd[1881]: time="2025-02-13T19:47:36.904046972Z" level=info msg="StopContainer for \"038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34\" with timeout 2 (s)" Feb 13 19:47:36.905235 containerd[1881]: time="2025-02-13T19:47:36.905203989Z" level=info msg="Stop container \"038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34\" with signal terminated" Feb 13 19:47:36.930695 systemd-networkd[1717]: lxc_health: Link DOWN Feb 13 19:47:36.930708 systemd-networkd[1717]: lxc_health: Lost carrier Feb 13 19:47:36.958057 systemd[1]: cri-containerd-5193b7ddb379998eae05ab2aec28acbb5e9b289791b91c23ee09ae18b02e581a.scope: Deactivated successfully. Feb 13 19:47:36.980642 systemd[1]: cri-containerd-038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34.scope: Deactivated successfully. Feb 13 19:47:36.980956 systemd[1]: cri-containerd-038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34.scope: Consumed 9.667s CPU time. Feb 13 19:47:37.085882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5193b7ddb379998eae05ab2aec28acbb5e9b289791b91c23ee09ae18b02e581a-rootfs.mount: Deactivated successfully. Feb 13 19:47:37.104377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34-rootfs.mount: Deactivated successfully. 
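The teardown above begins with StopContainer requests: containerd delivers the container's stop signal (SIGTERM, per "signal terminated") and escalates to SIGKILL only if the process outlives the per-call timeout, 30 s for the cilium-operator container and 2 s for the long-running agent; the .scope units are then deactivated and their accumulated CPU time reported. A minimal sketch of the corresponding CRI call, under the same client assumptions as the sandbox example earlier; only the container ID is taken from the log.

```go
// Sketch of the "StopContainer ... with timeout 30 (s)" call above.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Timeout is in seconds: SIGTERM first, SIGKILL if the container
	// is still alive when the window lapses.
	_, err = rt.StopContainer(context.Background(), &runtimeapi.StopContainerRequest{
		ContainerId: "5193b7ddb379998eae05ab2aec28acbb5e9b289791b91c23ee09ae18b02e581a",
		Timeout:     30,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```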
Feb 13 19:47:37.114305 containerd[1881]: time="2025-02-13T19:47:37.114222716Z" level=info msg="shim disconnected" id=038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34 namespace=k8s.io Feb 13 19:47:37.114305 containerd[1881]: time="2025-02-13T19:47:37.114300473Z" level=warning msg="cleaning up after shim disconnected" id=038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34 namespace=k8s.io Feb 13 19:47:37.114305 containerd[1881]: time="2025-02-13T19:47:37.114313025Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:47:37.114949 containerd[1881]: time="2025-02-13T19:47:37.114869872Z" level=info msg="shim disconnected" id=5193b7ddb379998eae05ab2aec28acbb5e9b289791b91c23ee09ae18b02e581a namespace=k8s.io Feb 13 19:47:37.114949 containerd[1881]: time="2025-02-13T19:47:37.114920574Z" level=warning msg="cleaning up after shim disconnected" id=5193b7ddb379998eae05ab2aec28acbb5e9b289791b91c23ee09ae18b02e581a namespace=k8s.io Feb 13 19:47:37.114949 containerd[1881]: time="2025-02-13T19:47:37.114933758Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:47:37.162683 containerd[1881]: time="2025-02-13T19:47:37.162435739Z" level=info msg="StopContainer for \"038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34\" returns successfully" Feb 13 19:47:37.163374 containerd[1881]: time="2025-02-13T19:47:37.163266537Z" level=info msg="StopContainer for \"5193b7ddb379998eae05ab2aec28acbb5e9b289791b91c23ee09ae18b02e581a\" returns successfully" Feb 13 19:47:37.165316 containerd[1881]: time="2025-02-13T19:47:37.165002177Z" level=info msg="StopPodSandbox for \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\"" Feb 13 19:47:37.197685 containerd[1881]: time="2025-02-13T19:47:37.197425632Z" level=info msg="Container to stop \"9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:47:37.197685 containerd[1881]: time="2025-02-13T19:47:37.197513371Z" level=info msg="Container to stop \"c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:47:37.197685 containerd[1881]: time="2025-02-13T19:47:37.197530030Z" level=info msg="Container to stop \"ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:47:37.197685 containerd[1881]: time="2025-02-13T19:47:37.197543162Z" level=info msg="Container to stop \"66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:47:37.197685 containerd[1881]: time="2025-02-13T19:47:37.197560188Z" level=info msg="Container to stop \"038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:47:37.199271 containerd[1881]: time="2025-02-13T19:47:37.199215530Z" level=info msg="StopPodSandbox for \"4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099\"" Feb 13 19:47:37.199397 containerd[1881]: time="2025-02-13T19:47:37.199290108Z" level=info msg="Container to stop \"5193b7ddb379998eae05ab2aec28acbb5e9b289791b91c23ee09ae18b02e581a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:47:37.215275 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099-shm.mount: Deactivated successfully. Feb 13 19:47:37.216103 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a-shm.mount: Deactivated successfully. Feb 13 19:47:37.241434 systemd[1]: cri-containerd-ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a.scope: Deactivated successfully. Feb 13 19:47:37.251043 systemd[1]: cri-containerd-4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099.scope: Deactivated successfully. Feb 13 19:47:37.315798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a-rootfs.mount: Deactivated successfully. Feb 13 19:47:37.328741 containerd[1881]: time="2025-02-13T19:47:37.328665017Z" level=info msg="shim disconnected" id=ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a namespace=k8s.io Feb 13 19:47:37.329390 containerd[1881]: time="2025-02-13T19:47:37.328924005Z" level=warning msg="cleaning up after shim disconnected" id=ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a namespace=k8s.io Feb 13 19:47:37.329390 containerd[1881]: time="2025-02-13T19:47:37.328944579Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:47:37.376964 containerd[1881]: time="2025-02-13T19:47:37.376912738Z" level=info msg="TearDown network for sandbox \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\" successfully" Feb 13 19:47:37.376964 containerd[1881]: time="2025-02-13T19:47:37.376966523Z" level=info msg="StopPodSandbox for \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\" returns successfully" Feb 13 19:47:37.412928 containerd[1881]: time="2025-02-13T19:47:37.407462551Z" level=info msg="shim disconnected" id=4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099 namespace=k8s.io Feb 13 19:47:37.412928 containerd[1881]: time="2025-02-13T19:47:37.407547883Z" level=warning msg="cleaning up after shim disconnected" id=4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099 namespace=k8s.io Feb 13 19:47:37.412928 containerd[1881]: time="2025-02-13T19:47:37.407561192Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:47:37.472800 containerd[1881]: time="2025-02-13T19:47:37.471551218Z" level=info msg="TearDown network for sandbox \"4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099\" successfully" Feb 13 19:47:37.472800 containerd[1881]: time="2025-02-13T19:47:37.471595377Z" level=info msg="StopPodSandbox for \"4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099\" returns successfully" Feb 13 19:47:37.540095 kubelet[3394]: I0213 19:47:37.538677 3394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-bpf-maps\") pod \"15320574-c84b-487c-aaed-c139223db200\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " Feb 13 19:47:37.540095 kubelet[3394]: I0213 19:47:37.538877 3394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwmc2\" (UniqueName: \"kubernetes.io/projected/15320574-c84b-487c-aaed-c139223db200-kube-api-access-xwmc2\") pod \"15320574-c84b-487c-aaed-c139223db200\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " Feb 13 19:47:37.540095 kubelet[3394]: I0213 19:47:37.538944 3394 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-cilium-run\") pod \"15320574-c84b-487c-aaed-c139223db200\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " Feb 13 19:47:37.540095 kubelet[3394]: I0213 19:47:37.538971 3394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-xtables-lock\") pod \"15320574-c84b-487c-aaed-c139223db200\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " Feb 13 19:47:37.540095 kubelet[3394]: I0213 19:47:37.539031 3394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15320574-c84b-487c-aaed-c139223db200-hubble-tls\") pod \"15320574-c84b-487c-aaed-c139223db200\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " Feb 13 19:47:37.540095 kubelet[3394]: I0213 19:47:37.539056 3394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-etc-cni-netd\") pod \"15320574-c84b-487c-aaed-c139223db200\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " Feb 13 19:47:37.541107 kubelet[3394]: I0213 19:47:37.539240 3394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-cilium-cgroup\") pod \"15320574-c84b-487c-aaed-c139223db200\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " Feb 13 19:47:37.541107 kubelet[3394]: I0213 19:47:37.539303 3394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15320574-c84b-487c-aaed-c139223db200-cilium-config-path\") pod \"15320574-c84b-487c-aaed-c139223db200\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " Feb 13 19:47:37.541107 kubelet[3394]: I0213 19:47:37.539331 3394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-host-proc-sys-kernel\") pod \"15320574-c84b-487c-aaed-c139223db200\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " Feb 13 19:47:37.541107 kubelet[3394]: I0213 19:47:37.539383 3394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-cni-path\") pod \"15320574-c84b-487c-aaed-c139223db200\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " Feb 13 19:47:37.541107 kubelet[3394]: I0213 19:47:37.539412 3394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/15320574-c84b-487c-aaed-c139223db200-clustermesh-secrets\") pod \"15320574-c84b-487c-aaed-c139223db200\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " Feb 13 19:47:37.541107 kubelet[3394]: I0213 19:47:37.539435 3394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-hostproc\") pod \"15320574-c84b-487c-aaed-c139223db200\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " Feb 13 19:47:37.541386 kubelet[3394]: I0213 19:47:37.539490 3394 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-host-proc-sys-net\") pod \"15320574-c84b-487c-aaed-c139223db200\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " Feb 13 19:47:37.541386 kubelet[3394]: I0213 19:47:37.539516 3394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-lib-modules\") pod \"15320574-c84b-487c-aaed-c139223db200\" (UID: \"15320574-c84b-487c-aaed-c139223db200\") " Feb 13 19:47:37.542700 kubelet[3394]: I0213 19:47:37.542436 3394 scope.go:117] "RemoveContainer" containerID="038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34" Feb 13 19:47:37.549176 kubelet[3394]: I0213 19:47:37.547293 3394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "15320574-c84b-487c-aaed-c139223db200" (UID: "15320574-c84b-487c-aaed-c139223db200"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:47:37.549176 kubelet[3394]: I0213 19:47:37.546847 3394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "15320574-c84b-487c-aaed-c139223db200" (UID: "15320574-c84b-487c-aaed-c139223db200"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:47:37.549176 kubelet[3394]: I0213 19:47:37.548915 3394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "15320574-c84b-487c-aaed-c139223db200" (UID: "15320574-c84b-487c-aaed-c139223db200"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:47:37.551461 kubelet[3394]: I0213 19:47:37.551414 3394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15320574-c84b-487c-aaed-c139223db200-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "15320574-c84b-487c-aaed-c139223db200" (UID: "15320574-c84b-487c-aaed-c139223db200"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:47:37.551640 kubelet[3394]: I0213 19:47:37.551491 3394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "15320574-c84b-487c-aaed-c139223db200" (UID: "15320574-c84b-487c-aaed-c139223db200"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:47:37.551640 kubelet[3394]: I0213 19:47:37.551515 3394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-cni-path" (OuterVolumeSpecName: "cni-path") pod "15320574-c84b-487c-aaed-c139223db200" (UID: "15320574-c84b-487c-aaed-c139223db200"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:47:37.556516 kubelet[3394]: I0213 19:47:37.555894 3394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15320574-c84b-487c-aaed-c139223db200-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "15320574-c84b-487c-aaed-c139223db200" (UID: "15320574-c84b-487c-aaed-c139223db200"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 19:47:37.556516 kubelet[3394]: I0213 19:47:37.555978 3394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-hostproc" (OuterVolumeSpecName: "hostproc") pod "15320574-c84b-487c-aaed-c139223db200" (UID: "15320574-c84b-487c-aaed-c139223db200"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:47:37.556516 kubelet[3394]: I0213 19:47:37.556003 3394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "15320574-c84b-487c-aaed-c139223db200" (UID: "15320574-c84b-487c-aaed-c139223db200"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:47:37.557048 kubelet[3394]: I0213 19:47:37.557016 3394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "15320574-c84b-487c-aaed-c139223db200" (UID: "15320574-c84b-487c-aaed-c139223db200"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:47:37.557284 kubelet[3394]: I0213 19:47:37.557099 3394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "15320574-c84b-487c-aaed-c139223db200" (UID: "15320574-c84b-487c-aaed-c139223db200"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:47:37.558700 kubelet[3394]: I0213 19:47:37.557696 3394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15320574-c84b-487c-aaed-c139223db200-kube-api-access-xwmc2" (OuterVolumeSpecName: "kube-api-access-xwmc2") pod "15320574-c84b-487c-aaed-c139223db200" (UID: "15320574-c84b-487c-aaed-c139223db200"). InnerVolumeSpecName "kube-api-access-xwmc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:47:37.558700 kubelet[3394]: I0213 19:47:37.557754 3394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "15320574-c84b-487c-aaed-c139223db200" (UID: "15320574-c84b-487c-aaed-c139223db200"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:47:37.570152 kubelet[3394]: I0213 19:47:37.570095 3394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15320574-c84b-487c-aaed-c139223db200-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "15320574-c84b-487c-aaed-c139223db200" (UID: "15320574-c84b-487c-aaed-c139223db200"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:47:37.574440 containerd[1881]: time="2025-02-13T19:47:37.574380198Z" level=info msg="RemoveContainer for \"038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34\"" Feb 13 19:47:37.582242 containerd[1881]: time="2025-02-13T19:47:37.582130496Z" level=info msg="RemoveContainer for \"038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34\" returns successfully" Feb 13 19:47:37.583137 kubelet[3394]: I0213 19:47:37.583008 3394 scope.go:117] "RemoveContainer" containerID="c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693" Feb 13 19:47:37.584291 containerd[1881]: time="2025-02-13T19:47:37.584247931Z" level=info msg="RemoveContainer for \"c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693\"" Feb 13 19:47:37.587969 containerd[1881]: time="2025-02-13T19:47:37.587926817Z" level=info msg="RemoveContainer for \"c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693\" returns successfully" Feb 13 19:47:37.593321 kubelet[3394]: I0213 19:47:37.593264 3394 scope.go:117] "RemoveContainer" containerID="9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7" Feb 13 19:47:37.597250 containerd[1881]: time="2025-02-13T19:47:37.597215462Z" level=info msg="RemoveContainer for \"9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7\"" Feb 13 19:47:37.601415 containerd[1881]: time="2025-02-13T19:47:37.601167592Z" level=info msg="RemoveContainer for \"9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7\" returns successfully" Feb 13 19:47:37.603131 kubelet[3394]: I0213 19:47:37.603008 3394 scope.go:117] "RemoveContainer" containerID="66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf" Feb 13 19:47:37.625730 containerd[1881]: time="2025-02-13T19:47:37.625663351Z" level=info msg="RemoveContainer for \"66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf\"" Feb 13 19:47:37.637326 containerd[1881]: time="2025-02-13T19:47:37.637113299Z" level=info msg="RemoveContainer for \"66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf\" returns successfully" Feb 13 19:47:37.638426 kubelet[3394]: I0213 19:47:37.637695 3394 scope.go:117] "RemoveContainer" containerID="ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8" Feb 13 19:47:37.639899 kubelet[3394]: I0213 19:47:37.639823 3394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c0597d5-da91-4a60-828b-034d34aa1071-cilium-config-path\") pod \"5c0597d5-da91-4a60-828b-034d34aa1071\" (UID: \"5c0597d5-da91-4a60-828b-034d34aa1071\") " Feb 13 19:47:37.640031 kubelet[3394]: I0213 19:47:37.639918 3394 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgm92\" (UniqueName: \"kubernetes.io/projected/5c0597d5-da91-4a60-828b-034d34aa1071-kube-api-access-vgm92\") pod \"5c0597d5-da91-4a60-828b-034d34aa1071\" (UID: \"5c0597d5-da91-4a60-828b-034d34aa1071\") " Feb 13 19:47:37.640031 kubelet[3394]: I0213 19:47:37.639987 3394 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-host-proc-sys-net\") on node \"ip-172-31-19-60\" DevicePath \"\"" Feb 13 19:47:37.640031 kubelet[3394]: I0213 19:47:37.640002 3394 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-lib-modules\") on node \"ip-172-31-19-60\" DevicePath \"\"" Feb 13 19:47:37.640031 kubelet[3394]: I0213 19:47:37.640016 3394 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-bpf-maps\") on node \"ip-172-31-19-60\" DevicePath \"\"" Feb 13 19:47:37.640031 kubelet[3394]: I0213 19:47:37.640027 3394 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xwmc2\" (UniqueName: \"kubernetes.io/projected/15320574-c84b-487c-aaed-c139223db200-kube-api-access-xwmc2\") on node \"ip-172-31-19-60\" DevicePath \"\"" Feb 13 19:47:37.640243 kubelet[3394]: I0213 19:47:37.640039 3394 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-cilium-run\") on node \"ip-172-31-19-60\" DevicePath \"\"" Feb 13 19:47:37.640243 kubelet[3394]: I0213 19:47:37.640053 3394 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-xtables-lock\") on node \"ip-172-31-19-60\" DevicePath \"\"" Feb 13 19:47:37.640243 kubelet[3394]: I0213 19:47:37.640065 3394 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15320574-c84b-487c-aaed-c139223db200-hubble-tls\") on node \"ip-172-31-19-60\" DevicePath \"\"" Feb 13 19:47:37.640243 kubelet[3394]: I0213 19:47:37.640077 3394 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-cilium-cgroup\") on node \"ip-172-31-19-60\" DevicePath \"\"" Feb 13 19:47:37.640243 kubelet[3394]: I0213 19:47:37.640089 3394 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15320574-c84b-487c-aaed-c139223db200-cilium-config-path\") on node \"ip-172-31-19-60\" DevicePath \"\"" Feb 13 19:47:37.640243 kubelet[3394]: I0213 19:47:37.640104 3394 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-host-proc-sys-kernel\") on node \"ip-172-31-19-60\" DevicePath \"\"" Feb 13 19:47:37.640243 kubelet[3394]: I0213 19:47:37.640116 3394 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-etc-cni-netd\") on node \"ip-172-31-19-60\" DevicePath \"\"" Feb 13 19:47:37.640243 kubelet[3394]: I0213 19:47:37.640128 3394 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-cni-path\") on node \"ip-172-31-19-60\" DevicePath \"\"" Feb 13 19:47:37.641317 kubelet[3394]: I0213 19:47:37.640142 3394 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/15320574-c84b-487c-aaed-c139223db200-clustermesh-secrets\") on node \"ip-172-31-19-60\" DevicePath \"\"" Feb 13 19:47:37.641317 kubelet[3394]: I0213 19:47:37.640153 3394 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15320574-c84b-487c-aaed-c139223db200-hostproc\") on node \"ip-172-31-19-60\" DevicePath \"\"" Feb 13 19:47:37.641410 containerd[1881]: time="2025-02-13T19:47:37.641257947Z" level=info msg="RemoveContainer for 
\"ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8\"" Feb 13 19:47:37.650358 kubelet[3394]: I0213 19:47:37.650313 3394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c0597d5-da91-4a60-828b-034d34aa1071-kube-api-access-vgm92" (OuterVolumeSpecName: "kube-api-access-vgm92") pod "5c0597d5-da91-4a60-828b-034d34aa1071" (UID: "5c0597d5-da91-4a60-828b-034d34aa1071"). InnerVolumeSpecName "kube-api-access-vgm92". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:47:37.651189 containerd[1881]: time="2025-02-13T19:47:37.651150010Z" level=info msg="RemoveContainer for \"ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8\" returns successfully" Feb 13 19:47:37.651609 kubelet[3394]: I0213 19:47:37.651584 3394 scope.go:117] "RemoveContainer" containerID="038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34" Feb 13 19:47:37.653029 containerd[1881]: time="2025-02-13T19:47:37.652983884Z" level=error msg="ContainerStatus for \"038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34\": not found" Feb 13 19:47:37.654477 kubelet[3394]: I0213 19:47:37.654325 3394 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c0597d5-da91-4a60-828b-034d34aa1071-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5c0597d5-da91-4a60-828b-034d34aa1071" (UID: "5c0597d5-da91-4a60-828b-034d34aa1071"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:47:37.658156 kubelet[3394]: E0213 19:47:37.658069 3394 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34\": not found" containerID="038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34" Feb 13 19:47:37.664647 kubelet[3394]: I0213 19:47:37.664435 3394 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34"} err="failed to get container status \"038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34\": rpc error: code = NotFound desc = an error occurred when try to find container \"038b176447ef7aa8eeb8eddd88353740e0aefb5a2ca5abe17e87d337382f9a34\": not found" Feb 13 19:47:37.664647 kubelet[3394]: I0213 19:47:37.664554 3394 scope.go:117] "RemoveContainer" containerID="c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693" Feb 13 19:47:37.667140 containerd[1881]: time="2025-02-13T19:47:37.667093782Z" level=error msg="ContainerStatus for \"c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693\": not found" Feb 13 19:47:37.668875 kubelet[3394]: E0213 19:47:37.668299 3394 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693\": not found" containerID="c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693" Feb 13 19:47:37.668875 kubelet[3394]: I0213 19:47:37.668347 3394 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693"} err="failed to get container status \"c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7146053f75993aa2dff078f910e66f300ffbc29e2aeff144ef534514a090693\": not found" Feb 13 19:47:37.668875 kubelet[3394]: I0213 19:47:37.668382 3394 scope.go:117] "RemoveContainer" containerID="9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7" Feb 13 19:47:37.669554 containerd[1881]: time="2025-02-13T19:47:37.668691763Z" level=error msg="ContainerStatus for \"9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7\": not found" Feb 13 19:47:37.669671 kubelet[3394]: E0213 19:47:37.669235 3394 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7\": not found" containerID="9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7" Feb 13 19:47:37.669671 kubelet[3394]: I0213 19:47:37.669276 3394 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7"} err="failed to get container status \"9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"9fca3fad9d9525527ee0eaeb6d7e1080f5d013d37c55ef83eb45a780d14ea5a7\": not found" Feb 13 19:47:37.669671 kubelet[3394]: I0213 19:47:37.669302 3394 scope.go:117] "RemoveContainer" containerID="66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf" Feb 13 19:47:37.670194 containerd[1881]: time="2025-02-13T19:47:37.670091938Z" level=error msg="ContainerStatus for \"66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf\": not found" Feb 13 19:47:37.670482 kubelet[3394]: E0213 19:47:37.670282 3394 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf\": not found" containerID="66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf" Feb 13 19:47:37.670482 kubelet[3394]: I0213 19:47:37.670386 3394 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf"} err="failed to get container status \"66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"66a6d8ab6a062e5237284922d84319ca210359a8f50b1c676fae87f3c1ae25cf\": not found" Feb 13 19:47:37.670482 kubelet[3394]: I0213 19:47:37.670405 3394 scope.go:117] "RemoveContainer" containerID="ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8" Feb 13 19:47:37.670897 containerd[1881]: time="2025-02-13T19:47:37.670659486Z" level=error msg="ContainerStatus for 
\"ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8\": not found" Feb 13 19:47:37.671091 kubelet[3394]: E0213 19:47:37.671039 3394 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8\": not found" containerID="ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8" Feb 13 19:47:37.671091 kubelet[3394]: I0213 19:47:37.671068 3394 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8"} err="failed to get container status \"ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae8b59129c08c70a9bcbc5e0280a652931be5528d48881273824b082554f59b8\": not found" Feb 13 19:47:37.740771 kubelet[3394]: I0213 19:47:37.740705 3394 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c0597d5-da91-4a60-828b-034d34aa1071-cilium-config-path\") on node \"ip-172-31-19-60\" DevicePath \"\"" Feb 13 19:47:37.740771 kubelet[3394]: I0213 19:47:37.740746 3394 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vgm92\" (UniqueName: \"kubernetes.io/projected/5c0597d5-da91-4a60-828b-034d34aa1071-kube-api-access-vgm92\") on node \"ip-172-31-19-60\" DevicePath \"\"" Feb 13 19:47:37.832370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099-rootfs.mount: Deactivated successfully. Feb 13 19:47:37.832507 systemd[1]: var-lib-kubelet-pods-15320574\x2dc84b\x2d487c\x2daaed\x2dc139223db200-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:47:37.832602 systemd[1]: var-lib-kubelet-pods-15320574\x2dc84b\x2d487c\x2daaed\x2dc139223db200-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:47:37.832697 systemd[1]: var-lib-kubelet-pods-5c0597d5\x2dda91\x2d4a60\x2d828b\x2d034d34aa1071-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvgm92.mount: Deactivated successfully. Feb 13 19:47:37.832806 systemd[1]: var-lib-kubelet-pods-15320574\x2dc84b\x2d487c\x2daaed\x2dc139223db200-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxwmc2.mount: Deactivated successfully. Feb 13 19:47:38.465308 kubelet[3394]: I0213 19:47:38.465265 3394 scope.go:117] "RemoveContainer" containerID="5193b7ddb379998eae05ab2aec28acbb5e9b289791b91c23ee09ae18b02e581a" Feb 13 19:47:38.487167 systemd[1]: Removed slice kubepods-besteffort-pod5c0597d5_da91_4a60_828b_034d34aa1071.slice - libcontainer container kubepods-besteffort-pod5c0597d5_da91_4a60_828b_034d34aa1071.slice. Feb 13 19:47:38.503719 containerd[1881]: time="2025-02-13T19:47:38.503556152Z" level=info msg="RemoveContainer for \"5193b7ddb379998eae05ab2aec28acbb5e9b289791b91c23ee09ae18b02e581a\"" Feb 13 19:47:38.510359 systemd[1]: Removed slice kubepods-burstable-pod15320574_c84b_487c_aaed_c139223db200.slice - libcontainer container kubepods-burstable-pod15320574_c84b_487c_aaed_c139223db200.slice. 
Feb 13 19:47:38.511403 systemd[1]: kubepods-burstable-pod15320574_c84b_487c_aaed_c139223db200.slice: Consumed 9.790s CPU time. Feb 13 19:47:38.519947 containerd[1881]: time="2025-02-13T19:47:38.519756043Z" level=info msg="RemoveContainer for \"5193b7ddb379998eae05ab2aec28acbb5e9b289791b91c23ee09ae18b02e581a\" returns successfully" Feb 13 19:47:38.641709 sshd[5007]: Connection closed by 139.178.89.65 port 52220 Feb 13 19:47:38.645017 sshd-session[5005]: pam_unix(sshd:session): session closed for user core Feb 13 19:47:38.652162 systemd[1]: sshd@24-172.31.19.60:22-139.178.89.65:52220.service: Deactivated successfully. Feb 13 19:47:38.657398 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:47:38.660514 systemd-logind[1873]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:47:38.662932 systemd-logind[1873]: Removed session 25. Feb 13 19:47:38.690468 systemd[1]: Started sshd@25-172.31.19.60:22-139.178.89.65:52226.service - OpenSSH per-connection server daemon (139.178.89.65:52226). Feb 13 19:47:38.884699 sshd[5169]: Accepted publickey for core from 139.178.89.65 port 52226 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:47:38.891163 sshd-session[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:47:38.912079 systemd-logind[1873]: New session 26 of user core. Feb 13 19:47:38.915368 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 19:47:39.917002 ntpd[1864]: Deleting interface #11 lxc_health, fe80::bc13:bdff:fe53:acc4%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs Feb 13 19:47:39.917455 ntpd[1864]: 13 Feb 19:47:39 ntpd[1864]: Deleting interface #11 lxc_health, fe80::bc13:bdff:fe53:acc4%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs Feb 13 19:47:40.253638 sshd[5171]: Connection closed by 139.178.89.65 port 52226 Feb 13 19:47:40.256528 sshd-session[5169]: pam_unix(sshd:session): session closed for user core Feb 13 19:47:40.268091 systemd[1]: sshd@25-172.31.19.60:22-139.178.89.65:52226.service: Deactivated successfully. Feb 13 19:47:40.268682 systemd-logind[1873]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:47:40.281364 systemd[1]: session-26.scope: Deactivated successfully. 
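The mount units deactivated a few entries up (var-lib-kubelet-pods-15320574\x2dc84b\x2d... and friends) use systemd's unit-name escaping: "/" in the filesystem path becomes "-", while literal "-", "~", and other reserved bytes become \xNN hex escapes (\x2d and \x7e here). A hypothetical decoder for those names is sketched below; unescape is not a systemd API (the real tool is systemd-escape -u), and the remaining "-" separators in its output stand for "/" in the actual path under /var/lib/kubelet/pods/.

```go
// Hypothetical decoder for the \xNN escapes in systemd unit names.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescape reverses \xNN byte escapes such as \x2d ("-") and \x7e ("~").
func unescape(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); {
		if i+3 < len(s) && s[i] == '\\' && s[i+1] == 'x' {
			if n, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(n))
				i += 4
				continue
			}
		}
		b.WriteByte(s[i])
		i++
	}
	return b.String()
}

func main() {
	fmt.Println(unescape(`var-lib-kubelet-pods-15320574\x2dc84b\x2d487c\x2daaed\x2dc139223db200-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets`))
	// var-lib-kubelet-pods-15320574-c84b-487c-aaed-c139223db200-volumes-kubernetes.io~secret-clustermesh-secrets
}
```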
Feb 13 19:47:40.308700 kubelet[3394]: E0213 19:47:40.308657 3394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15320574-c84b-487c-aaed-c139223db200" containerName="mount-cgroup" Feb 13 19:47:40.309802 kubelet[3394]: E0213 19:47:40.309365 3394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5c0597d5-da91-4a60-828b-034d34aa1071" containerName="cilium-operator" Feb 13 19:47:40.309928 kubelet[3394]: E0213 19:47:40.309913 3394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15320574-c84b-487c-aaed-c139223db200" containerName="cilium-agent" Feb 13 19:47:40.310255 kubelet[3394]: E0213 19:47:40.310198 3394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15320574-c84b-487c-aaed-c139223db200" containerName="apply-sysctl-overwrites" Feb 13 19:47:40.310454 kubelet[3394]: E0213 19:47:40.310435 3394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15320574-c84b-487c-aaed-c139223db200" containerName="mount-bpf-fs" Feb 13 19:47:40.311245 kubelet[3394]: E0213 19:47:40.310537 3394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15320574-c84b-487c-aaed-c139223db200" containerName="clean-cilium-state" Feb 13 19:47:40.311890 kubelet[3394]: I0213 19:47:40.311812 3394 memory_manager.go:354] "RemoveStaleState removing state" podUID="15320574-c84b-487c-aaed-c139223db200" containerName="cilium-agent" Feb 13 19:47:40.312333 kubelet[3394]: I0213 19:47:40.312300 3394 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c0597d5-da91-4a60-828b-034d34aa1071" containerName="cilium-operator" Feb 13 19:47:40.334936 systemd[1]: Started sshd@26-172.31.19.60:22-139.178.89.65:52232.service - OpenSSH per-connection server daemon (139.178.89.65:52232). Feb 13 19:47:40.339199 systemd-logind[1873]: Removed session 26. Feb 13 19:47:40.420789 systemd[1]: Created slice kubepods-burstable-pod60a6c545_339a_43d4_97d3_e7cc7a128ca0.slice - libcontainer container kubepods-burstable-pod60a6c545_339a_43d4_97d3_e7cc7a128ca0.slice. 
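The cpu_manager and memory_manager errors above are housekeeping, not failures: before admitting the replacement pod, both managers purge per-container state left behind by the two pods removed above, which are no longer active. The pattern is a prune of assignments whose pod UID has left the active set; an illustrative sketch (the types and names here are invented, not kubelet's):

    // Generic stale-state prune, the shape of cpu_manager's RemoveStaleState:
    // forget per-container assignments whose pod is no longer active.
    package main

    import "fmt"

    type key struct{ podUID, container string }

    func removeStaleState(assignments map[key]string, activePods map[string]bool) {
    	for k := range assignments {
    		if !activePods[k.podUID] {
    			fmt.Printf("RemoveStaleState: removing container %q of pod %q\n", k.container, k.podUID)
    			delete(assignments, k)
    		}
    	}
    }

    func main() {
    	a := map[key]string{
    		{"15320574-c84b-487c-aaed-c139223db200", "cilium-agent"}: "cpus 0-1",
    	}
    	removeStaleState(a, map[string]bool{"60a6c545-339a-43d4-97d3-e7cc7a128ca0": true})
    }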
Feb 13 19:47:40.501464 kubelet[3394]: I0213 19:47:40.501413 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60a6c545-339a-43d4-97d3-e7cc7a128ca0-etc-cni-netd\") pod \"cilium-c7whx\" (UID: \"60a6c545-339a-43d4-97d3-e7cc7a128ca0\") " pod="kube-system/cilium-c7whx" Feb 13 19:47:40.501646 kubelet[3394]: I0213 19:47:40.501477 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/60a6c545-339a-43d4-97d3-e7cc7a128ca0-clustermesh-secrets\") pod \"cilium-c7whx\" (UID: \"60a6c545-339a-43d4-97d3-e7cc7a128ca0\") " pod="kube-system/cilium-c7whx" Feb 13 19:47:40.501646 kubelet[3394]: I0213 19:47:40.501504 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/60a6c545-339a-43d4-97d3-e7cc7a128ca0-host-proc-sys-net\") pod \"cilium-c7whx\" (UID: \"60a6c545-339a-43d4-97d3-e7cc7a128ca0\") " pod="kube-system/cilium-c7whx" Feb 13 19:47:40.501646 kubelet[3394]: I0213 19:47:40.501526 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/60a6c545-339a-43d4-97d3-e7cc7a128ca0-bpf-maps\") pod \"cilium-c7whx\" (UID: \"60a6c545-339a-43d4-97d3-e7cc7a128ca0\") " pod="kube-system/cilium-c7whx" Feb 13 19:47:40.501646 kubelet[3394]: I0213 19:47:40.501549 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/60a6c545-339a-43d4-97d3-e7cc7a128ca0-cilium-cgroup\") pod \"cilium-c7whx\" (UID: \"60a6c545-339a-43d4-97d3-e7cc7a128ca0\") " pod="kube-system/cilium-c7whx" Feb 13 19:47:40.501646 kubelet[3394]: I0213 19:47:40.501569 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/60a6c545-339a-43d4-97d3-e7cc7a128ca0-cni-path\") pod \"cilium-c7whx\" (UID: \"60a6c545-339a-43d4-97d3-e7cc7a128ca0\") " pod="kube-system/cilium-c7whx" Feb 13 19:47:40.501646 kubelet[3394]: I0213 19:47:40.501590 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/60a6c545-339a-43d4-97d3-e7cc7a128ca0-host-proc-sys-kernel\") pod \"cilium-c7whx\" (UID: \"60a6c545-339a-43d4-97d3-e7cc7a128ca0\") " pod="kube-system/cilium-c7whx" Feb 13 19:47:40.501942 kubelet[3394]: I0213 19:47:40.501612 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcl6r\" (UniqueName: \"kubernetes.io/projected/60a6c545-339a-43d4-97d3-e7cc7a128ca0-kube-api-access-bcl6r\") pod \"cilium-c7whx\" (UID: \"60a6c545-339a-43d4-97d3-e7cc7a128ca0\") " pod="kube-system/cilium-c7whx" Feb 13 19:47:40.501942 kubelet[3394]: I0213 19:47:40.501636 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/60a6c545-339a-43d4-97d3-e7cc7a128ca0-cilium-ipsec-secrets\") pod \"cilium-c7whx\" (UID: \"60a6c545-339a-43d4-97d3-e7cc7a128ca0\") " pod="kube-system/cilium-c7whx" Feb 13 19:47:40.501942 kubelet[3394]: I0213 19:47:40.501663 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/60a6c545-339a-43d4-97d3-e7cc7a128ca0-hostproc\") pod \"cilium-c7whx\" (UID: \"60a6c545-339a-43d4-97d3-e7cc7a128ca0\") " pod="kube-system/cilium-c7whx" Feb 13 19:47:40.501942 kubelet[3394]: I0213 19:47:40.501687 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60a6c545-339a-43d4-97d3-e7cc7a128ca0-xtables-lock\") pod \"cilium-c7whx\" (UID: \"60a6c545-339a-43d4-97d3-e7cc7a128ca0\") " pod="kube-system/cilium-c7whx" Feb 13 19:47:40.501942 kubelet[3394]: I0213 19:47:40.501715 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60a6c545-339a-43d4-97d3-e7cc7a128ca0-lib-modules\") pod \"cilium-c7whx\" (UID: \"60a6c545-339a-43d4-97d3-e7cc7a128ca0\") " pod="kube-system/cilium-c7whx" Feb 13 19:47:40.501942 kubelet[3394]: I0213 19:47:40.501744 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/60a6c545-339a-43d4-97d3-e7cc7a128ca0-cilium-config-path\") pod \"cilium-c7whx\" (UID: \"60a6c545-339a-43d4-97d3-e7cc7a128ca0\") " pod="kube-system/cilium-c7whx" Feb 13 19:47:40.502249 kubelet[3394]: I0213 19:47:40.501790 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/60a6c545-339a-43d4-97d3-e7cc7a128ca0-cilium-run\") pod \"cilium-c7whx\" (UID: \"60a6c545-339a-43d4-97d3-e7cc7a128ca0\") " pod="kube-system/cilium-c7whx" Feb 13 19:47:40.502249 kubelet[3394]: I0213 19:47:40.501818 3394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/60a6c545-339a-43d4-97d3-e7cc7a128ca0-hubble-tls\") pod \"cilium-c7whx\" (UID: \"60a6c545-339a-43d4-97d3-e7cc7a128ca0\") " pod="kube-system/cilium-c7whx" Feb 13 19:47:40.527189 kubelet[3394]: I0213 19:47:40.523524 3394 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15320574-c84b-487c-aaed-c139223db200" path="/var/lib/kubelet/pods/15320574-c84b-487c-aaed-c139223db200/volumes" Feb 13 19:47:40.530177 kubelet[3394]: I0213 19:47:40.530086 3394 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c0597d5-da91-4a60-828b-034d34aa1071" path="/var/lib/kubelet/pods/5c0597d5-da91-4a60-828b-034d34aa1071/volumes" Feb 13 19:47:40.622622 sshd[5181]: Accepted publickey for core from 139.178.89.65 port 52232 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:47:40.624891 sshd-session[5181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:47:40.652862 systemd-logind[1873]: New session 27 of user core. Feb 13 19:47:40.658115 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 19:47:40.741713 containerd[1881]: time="2025-02-13T19:47:40.741670417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c7whx,Uid:60a6c545-339a-43d4-97d3-e7cc7a128ca0,Namespace:kube-system,Attempt:0,}" Feb 13 19:47:40.776974 sshd[5187]: Connection closed by 139.178.89.65 port 52232 Feb 13 19:47:40.780119 sshd-session[5181]: pam_unix(sshd:session): session closed for user core Feb 13 19:47:40.784422 containerd[1881]: time="2025-02-13T19:47:40.784072155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:47:40.784422 containerd[1881]: time="2025-02-13T19:47:40.784144172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:47:40.784422 containerd[1881]: time="2025-02-13T19:47:40.784162623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:47:40.784422 containerd[1881]: time="2025-02-13T19:47:40.784275656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:47:40.791593 systemd[1]: sshd@26-172.31.19.60:22-139.178.89.65:52232.service: Deactivated successfully. Feb 13 19:47:40.835861 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 19:47:40.843373 systemd-logind[1873]: Session 27 logged out. Waiting for processes to exit. Feb 13 19:47:40.857983 systemd-logind[1873]: Removed session 27. Feb 13 19:47:40.863992 systemd[1]: Started cri-containerd-5219c19bc3790776edfd8b3f85dc3ba0830e9b712030aed228de8d6e76650c03.scope - libcontainer container 5219c19bc3790776edfd8b3f85dc3ba0830e9b712030aed228de8d6e76650c03. Feb 13 19:47:40.872669 systemd[1]: Started sshd@27-172.31.19.60:22-139.178.89.65:52240.service - OpenSSH per-connection server daemon (139.178.89.65:52240). Feb 13 19:47:40.942011 containerd[1881]: time="2025-02-13T19:47:40.941761829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c7whx,Uid:60a6c545-339a-43d4-97d3-e7cc7a128ca0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5219c19bc3790776edfd8b3f85dc3ba0830e9b712030aed228de8d6e76650c03\"" Feb 13 19:47:40.953424 containerd[1881]: time="2025-02-13T19:47:40.953161503Z" level=info msg="CreateContainer within sandbox \"5219c19bc3790776edfd8b3f85dc3ba0830e9b712030aed228de8d6e76650c03\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:47:40.994451 containerd[1881]: time="2025-02-13T19:47:40.994394126Z" level=info msg="CreateContainer within sandbox \"5219c19bc3790776edfd8b3f85dc3ba0830e9b712030aed228de8d6e76650c03\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9f73cc04f32a3a5a89057970e71305d64b59898bd9ed407653b26411dd3e915f\"" Feb 13 19:47:40.995583 containerd[1881]: time="2025-02-13T19:47:40.995463076Z" level=info msg="StartContainer for \"9f73cc04f32a3a5a89057970e71305d64b59898bd9ed407653b26411dd3e915f\"" Feb 13 19:47:41.063912 systemd[1]: Started cri-containerd-9f73cc04f32a3a5a89057970e71305d64b59898bd9ed407653b26411dd3e915f.scope - libcontainer container 9f73cc04f32a3a5a89057970e71305d64b59898bd9ed407653b26411dd3e915f. Feb 13 19:47:41.080400 sshd[5222]: Accepted publickey for core from 139.178.89.65 port 52240 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:47:41.081885 sshd-session[5222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:47:41.104214 systemd-logind[1873]: New session 28 of user core. Feb 13 19:47:41.112294 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 19:47:41.146440 containerd[1881]: time="2025-02-13T19:47:41.146288541Z" level=info msg="StartContainer for \"9f73cc04f32a3a5a89057970e71305d64b59898bd9ed407653b26411dd3e915f\" returns successfully" Feb 13 19:47:41.170566 systemd[1]: cri-containerd-9f73cc04f32a3a5a89057970e71305d64b59898bd9ed407653b26411dd3e915f.scope: Deactivated successfully. 
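With all of the volumes above verified attached (the operationExecutor entries), the kubelet drives the new pod through the CRI: RunPodSandbox returns the 5219c19b... sandbox, then the first init container, mount-cgroup, is created inside it and started. The call sequence, sketched against the CRI v1 API with a placeholder image and command:

    // CRI v1 bring-up: sandbox first, then the first container inside it.
    // Socket address, image and command are assumptions, not taken from the log.
    package main

    import (
    	"context"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	rt := runtimev1.NewRuntimeServiceClient(conn)
    	ctx := context.Background()

    	sandboxCfg := &runtimev1.PodSandboxConfig{
    		Metadata: &runtimev1.PodSandboxMetadata{
    			Name:      "cilium-c7whx",
    			Uid:       "60a6c545-339a-43d4-97d3-e7cc7a128ca0",
    			Namespace: "kube-system",
    			Attempt:   0,
    		},
    	}
    	sb, err := rt.RunPodSandbox(ctx, &runtimev1.RunPodSandboxRequest{Config: sandboxCfg})
    	if err != nil {
    		panic(err)
    	}

    	c, err := rt.CreateContainer(ctx, &runtimev1.CreateContainerRequest{
    		PodSandboxId: sb.PodSandboxId,
    		Config: &runtimev1.ContainerConfig{
    			Metadata: &runtimev1.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
    			Image:    &runtimev1.ImageSpec{Image: "quay.io/cilium/cilium:v1.x"}, // placeholder
    			Command:  []string{"sh", "-ec", "..."},                              // elided
    		},
    		SandboxConfig: sandboxCfg,
    	})
    	if err != nil {
    		panic(err)
    	}
    	if _, err := rt.StartContainer(ctx, &runtimev1.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
    		panic(err)
    	}
    }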
Feb 13 19:47:41.235688 containerd[1881]: time="2025-02-13T19:47:41.235317116Z" level=info msg="shim disconnected" id=9f73cc04f32a3a5a89057970e71305d64b59898bd9ed407653b26411dd3e915f namespace=k8s.io Feb 13 19:47:41.235688 containerd[1881]: time="2025-02-13T19:47:41.235379078Z" level=warning msg="cleaning up after shim disconnected" id=9f73cc04f32a3a5a89057970e71305d64b59898bd9ed407653b26411dd3e915f namespace=k8s.io Feb 13 19:47:41.235688 containerd[1881]: time="2025-02-13T19:47:41.235388575Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:47:41.530237 containerd[1881]: time="2025-02-13T19:47:41.530013744Z" level=info msg="CreateContainer within sandbox \"5219c19bc3790776edfd8b3f85dc3ba0830e9b712030aed228de8d6e76650c03\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:47:41.561904 containerd[1881]: time="2025-02-13T19:47:41.561088388Z" level=info msg="CreateContainer within sandbox \"5219c19bc3790776edfd8b3f85dc3ba0830e9b712030aed228de8d6e76650c03\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"00e6e70fae42db4a1bfe91945bf7499c3326ffaa8a0618bb745ef698bbb82a1e\"" Feb 13 19:47:41.566480 containerd[1881]: time="2025-02-13T19:47:41.563728414Z" level=info msg="StartContainer for \"00e6e70fae42db4a1bfe91945bf7499c3326ffaa8a0618bb745ef698bbb82a1e\"" Feb 13 19:47:41.627326 systemd[1]: Started cri-containerd-00e6e70fae42db4a1bfe91945bf7499c3326ffaa8a0618bb745ef698bbb82a1e.scope - libcontainer container 00e6e70fae42db4a1bfe91945bf7499c3326ffaa8a0618bb745ef698bbb82a1e. Feb 13 19:47:41.696591 containerd[1881]: time="2025-02-13T19:47:41.696441283Z" level=info msg="StartContainer for \"00e6e70fae42db4a1bfe91945bf7499c3326ffaa8a0618bb745ef698bbb82a1e\" returns successfully" Feb 13 19:47:41.748759 kubelet[3394]: E0213 19:47:41.746933 3394 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:47:41.753305 systemd[1]: cri-containerd-00e6e70fae42db4a1bfe91945bf7499c3326ffaa8a0618bb745ef698bbb82a1e.scope: Deactivated successfully. Feb 13 19:47:41.811769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00e6e70fae42db4a1bfe91945bf7499c3326ffaa8a0618bb745ef698bbb82a1e-rootfs.mount: Deactivated successfully. 
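mount-cgroup runs to completion almost immediately; the "shim disconnected" trio above is containerd's runc v2 shim going away once the task exits, after which systemd unmounts the task rootfs. (The NetworkNotReady error is expected at this stage: no CNI config exists until cilium-agent writes one.) Waiting on such an exit with the containerd Go client looks roughly like this, with the socket, namespace and container ID taken from the log and error handling abbreviated:

    // Observe a task run to completion; roughly what sits behind the
    // "shim disconnected" cleanup above.
    package main

    import (
    	"context"
    	"fmt"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	c, err := client.LoadContainer(ctx, "9f73cc04f32a3a5a89057970e71305d64b59898bd9ed407653b26411dd3e915f")
    	if err != nil {
    		panic(err)
    	}
    	task, err := c.Task(ctx, nil)
    	if err != nil {
    		panic(err)
    	}
    	exitCh, err := task.Wait(ctx)
    	if err != nil {
    		panic(err)
    	}
    	st := <-exitCh // delivered when the shim reports the exit
    	fmt.Println("exit code:", st.ExitCode())
    }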
Feb 13 19:47:41.828715 containerd[1881]: time="2025-02-13T19:47:41.828647838Z" level=info msg="shim disconnected" id=00e6e70fae42db4a1bfe91945bf7499c3326ffaa8a0618bb745ef698bbb82a1e namespace=k8s.io Feb 13 19:47:41.828715 containerd[1881]: time="2025-02-13T19:47:41.828709160Z" level=warning msg="cleaning up after shim disconnected" id=00e6e70fae42db4a1bfe91945bf7499c3326ffaa8a0618bb745ef698bbb82a1e namespace=k8s.io Feb 13 19:47:41.828715 containerd[1881]: time="2025-02-13T19:47:41.828719603Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:47:42.535089 containerd[1881]: time="2025-02-13T19:47:42.535036131Z" level=info msg="CreateContainer within sandbox \"5219c19bc3790776edfd8b3f85dc3ba0830e9b712030aed228de8d6e76650c03\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:47:42.600669 containerd[1881]: time="2025-02-13T19:47:42.600603522Z" level=info msg="CreateContainer within sandbox \"5219c19bc3790776edfd8b3f85dc3ba0830e9b712030aed228de8d6e76650c03\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e7eaa4b28a3a6ea5a3456a662e0b190e29c0f1451fc1b9fd87bb8fd11d56de7b\"" Feb 13 19:47:42.602208 containerd[1881]: time="2025-02-13T19:47:42.601550258Z" level=info msg="StartContainer for \"e7eaa4b28a3a6ea5a3456a662e0b190e29c0f1451fc1b9fd87bb8fd11d56de7b\"" Feb 13 19:47:42.674183 systemd[1]: Started cri-containerd-e7eaa4b28a3a6ea5a3456a662e0b190e29c0f1451fc1b9fd87bb8fd11d56de7b.scope - libcontainer container e7eaa4b28a3a6ea5a3456a662e0b190e29c0f1451fc1b9fd87bb8fd11d56de7b. Feb 13 19:47:42.721151 containerd[1881]: time="2025-02-13T19:47:42.721058719Z" level=info msg="StartContainer for \"e7eaa4b28a3a6ea5a3456a662e0b190e29c0f1451fc1b9fd87bb8fd11d56de7b\" returns successfully" Feb 13 19:47:42.746351 systemd[1]: cri-containerd-e7eaa4b28a3a6ea5a3456a662e0b190e29c0f1451fc1b9fd87bb8fd11d56de7b.scope: Deactivated successfully. Feb 13 19:47:42.791746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7eaa4b28a3a6ea5a3456a662e0b190e29c0f1451fc1b9fd87bb8fd11d56de7b-rootfs.mount: Deactivated successfully. 
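The cycle above (CreateContainer, StartContainer, exit, shim disconnected, rootfs unmount) now repeats for apply-sysctl-overwrites and, next, mount-bpf-fs. That strict serialization is ordinary init-container semantics: the kubelet runs them one at a time, in declaration order, and only then starts cilium-agent. The pod's shape, sketched in Go types (an abbreviation, not the real Cilium manifest; images and commands are elided):

    // Init containers run serially, in declaration order, before the main
    // cilium-agent container starts.
    package main

    import corev1 "k8s.io/api/core/v1"

    func ciliumPodSkeleton() corev1.PodSpec {
    	img := "quay.io/cilium/cilium:v1.x" // placeholder
    	return corev1.PodSpec{
    		InitContainers: []corev1.Container{
    			{Name: "mount-cgroup", Image: img},
    			{Name: "apply-sysctl-overwrites", Image: img},
    			{Name: "mount-bpf-fs", Image: img},
    			{Name: "clean-cilium-state", Image: img},
    		},
    		Containers: []corev1.Container{
    			{Name: "cilium-agent", Image: img},
    		},
    	}
    }

    func main() { _ = ciliumPodSkeleton() }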
Feb 13 19:47:42.804289 containerd[1881]: time="2025-02-13T19:47:42.804140530Z" level=info msg="shim disconnected" id=e7eaa4b28a3a6ea5a3456a662e0b190e29c0f1451fc1b9fd87bb8fd11d56de7b namespace=k8s.io Feb 13 19:47:42.804289 containerd[1881]: time="2025-02-13T19:47:42.804289097Z" level=warning msg="cleaning up after shim disconnected" id=e7eaa4b28a3a6ea5a3456a662e0b190e29c0f1451fc1b9fd87bb8fd11d56de7b namespace=k8s.io Feb 13 19:47:42.804607 containerd[1881]: time="2025-02-13T19:47:42.804303574Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:47:42.829140 containerd[1881]: time="2025-02-13T19:47:42.829045416Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:47:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:47:43.560707 containerd[1881]: time="2025-02-13T19:47:43.560511345Z" level=info msg="CreateContainer within sandbox \"5219c19bc3790776edfd8b3f85dc3ba0830e9b712030aed228de8d6e76650c03\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:47:43.596841 containerd[1881]: time="2025-02-13T19:47:43.596767078Z" level=info msg="CreateContainer within sandbox \"5219c19bc3790776edfd8b3f85dc3ba0830e9b712030aed228de8d6e76650c03\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3058eed40eacb43320dda39f6771b19d18fba7fec6862f7845cfdf3673d9a61f\"" Feb 13 19:47:43.599545 containerd[1881]: time="2025-02-13T19:47:43.597796109Z" level=info msg="StartContainer for \"3058eed40eacb43320dda39f6771b19d18fba7fec6862f7845cfdf3673d9a61f\"" Feb 13 19:47:43.668100 systemd[1]: Started cri-containerd-3058eed40eacb43320dda39f6771b19d18fba7fec6862f7845cfdf3673d9a61f.scope - libcontainer container 3058eed40eacb43320dda39f6771b19d18fba7fec6862f7845cfdf3673d9a61f. Feb 13 19:47:43.711678 systemd[1]: cri-containerd-3058eed40eacb43320dda39f6771b19d18fba7fec6862f7845cfdf3673d9a61f.scope: Deactivated successfully. Feb 13 19:47:43.717920 containerd[1881]: time="2025-02-13T19:47:43.717483499Z" level=info msg="StartContainer for \"3058eed40eacb43320dda39f6771b19d18fba7fec6862f7845cfdf3673d9a61f\" returns successfully" Feb 13 19:47:43.749984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3058eed40eacb43320dda39f6771b19d18fba7fec6862f7845cfdf3673d9a61f-rootfs.mount: Deactivated successfully. 
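mount-bpf-fs finishes the same way; the trailing runc "exit status 255" cleanup message above is logged as a warning and the pipeline continues regardless. The step itself amounts to one mount call: ensure bpffs is mounted at /sys/fs/bpf so pinned BPF maps outlive agent restarts. A sketch, not Cilium's actual code (older kernels report EBUSY when the target is already a bpffs mountpoint):

    // Mount bpffs at /sys/fs/bpf, tolerating an existing mount.
    package main

    import (
    	"errors"
    	"fmt"

    	"golang.org/x/sys/unix"
    )

    func main() {
    	err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, "")
    	if errors.Is(err, unix.EBUSY) {
    		fmt.Println("bpffs already mounted")
    		return
    	}
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("bpffs mounted")
    }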
Feb 13 19:47:43.761968 containerd[1881]: time="2025-02-13T19:47:43.761690489Z" level=info msg="shim disconnected" id=3058eed40eacb43320dda39f6771b19d18fba7fec6862f7845cfdf3673d9a61f namespace=k8s.io Feb 13 19:47:43.761968 containerd[1881]: time="2025-02-13T19:47:43.761759385Z" level=warning msg="cleaning up after shim disconnected" id=3058eed40eacb43320dda39f6771b19d18fba7fec6862f7845cfdf3673d9a61f namespace=k8s.io Feb 13 19:47:43.761968 containerd[1881]: time="2025-02-13T19:47:43.761769789Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:47:44.570587 containerd[1881]: time="2025-02-13T19:47:44.570534825Z" level=info msg="CreateContainer within sandbox \"5219c19bc3790776edfd8b3f85dc3ba0830e9b712030aed228de8d6e76650c03\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:47:44.631862 containerd[1881]: time="2025-02-13T19:47:44.631500240Z" level=info msg="CreateContainer within sandbox \"5219c19bc3790776edfd8b3f85dc3ba0830e9b712030aed228de8d6e76650c03\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b44930e0bd7c8f6c8a292a02235714b05f7cdef4695340af892229b486150c22\"" Feb 13 19:47:44.641802 containerd[1881]: time="2025-02-13T19:47:44.636483767Z" level=info msg="StartContainer for \"b44930e0bd7c8f6c8a292a02235714b05f7cdef4695340af892229b486150c22\"" Feb 13 19:47:44.777175 systemd[1]: Started cri-containerd-b44930e0bd7c8f6c8a292a02235714b05f7cdef4695340af892229b486150c22.scope - libcontainer container b44930e0bd7c8f6c8a292a02235714b05f7cdef4695340af892229b486150c22. Feb 13 19:47:44.846287 containerd[1881]: time="2025-02-13T19:47:44.845350013Z" level=info msg="StartContainer for \"b44930e0bd7c8f6c8a292a02235714b05f7cdef4695340af892229b486150c22\" returns successfully" Feb 13 19:47:45.937872 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 19:47:46.453363 containerd[1881]: time="2025-02-13T19:47:46.453056870Z" level=info msg="StopPodSandbox for \"4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099\"" Feb 13 19:47:46.453985 containerd[1881]: time="2025-02-13T19:47:46.453424845Z" level=info msg="TearDown network for sandbox \"4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099\" successfully" Feb 13 19:47:46.453985 containerd[1881]: time="2025-02-13T19:47:46.453444908Z" level=info msg="StopPodSandbox for \"4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099\" returns successfully" Feb 13 19:47:46.488052 containerd[1881]: time="2025-02-13T19:47:46.487999279Z" level=info msg="RemovePodSandbox for \"4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099\"" Feb 13 19:47:46.488052 containerd[1881]: time="2025-02-13T19:47:46.488060555Z" level=info msg="Forcibly stopping sandbox \"4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099\"" Feb 13 19:47:46.488443 containerd[1881]: time="2025-02-13T19:47:46.488150392Z" level=info msg="TearDown network for sandbox \"4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099\" successfully" Feb 13 19:47:46.495637 containerd[1881]: time="2025-02-13T19:47:46.495578803Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
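With clean-cilium-state done, cilium-agent itself starts; the kernel's seqiv(rfc4106(gcm(aes))) note is most likely the IPsec transform being loaded for the cilium-ipsec-secrets mounted earlier. Meanwhile the kubelet retires the dead cilium-operator sandbox: StopPodSandbox, network teardown, then a forcible RemovePodSandbox that, like container removal earlier, treats "not found" as already done. Sketched against the same CRI v1 API (socket address assumed):

    // Sandbox retirement: stop, then remove, treating NotFound as already done.
    package main

    import (
    	"context"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/credentials/insecure"
    	"google.golang.org/grpc/status"
    	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	rt := runtimev1.NewRuntimeServiceClient(conn)
    	ctx := context.Background()
    	id := "4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099"

    	if _, err := rt.StopPodSandbox(ctx, &runtimev1.StopPodSandboxRequest{PodSandboxId: id}); err != nil && status.Code(err) != codes.NotFound {
    		panic(err)
    	}
    	if _, err := rt.RemovePodSandbox(ctx, &runtimev1.RemovePodSandboxRequest{PodSandboxId: id}); err != nil && status.Code(err) != codes.NotFound {
    		panic(err)
    	}
    }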
Feb 13 19:47:46.495893 containerd[1881]: time="2025-02-13T19:47:46.495661588Z" level=info msg="RemovePodSandbox \"4fc44529e8006271372add233fd6e6a17f88f3333e1be8786c1c5a3d8f96b099\" returns successfully" Feb 13 19:47:46.496470 containerd[1881]: time="2025-02-13T19:47:46.496436777Z" level=info msg="StopPodSandbox for \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\"" Feb 13 19:47:46.496599 containerd[1881]: time="2025-02-13T19:47:46.496548916Z" level=info msg="TearDown network for sandbox \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\" successfully" Feb 13 19:47:46.496599 containerd[1881]: time="2025-02-13T19:47:46.496564899Z" level=info msg="StopPodSandbox for \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\" returns successfully" Feb 13 19:47:46.496933 containerd[1881]: time="2025-02-13T19:47:46.496901994Z" level=info msg="RemovePodSandbox for \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\"" Feb 13 19:47:46.496933 containerd[1881]: time="2025-02-13T19:47:46.496929480Z" level=info msg="Forcibly stopping sandbox \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\"" Feb 13 19:47:46.497100 containerd[1881]: time="2025-02-13T19:47:46.496987217Z" level=info msg="TearDown network for sandbox \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\" successfully" Feb 13 19:47:46.503564 containerd[1881]: time="2025-02-13T19:47:46.503508445Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:47:46.503731 containerd[1881]: time="2025-02-13T19:47:46.503582779Z" level=info msg="RemovePodSandbox \"ff9052e6263d5772efd8c429ef3f36c7a4ab5aba8f11cbfdd5a97037796c2a3a\" returns successfully" Feb 13 19:47:48.302513 systemd[1]: run-containerd-runc-k8s.io-b44930e0bd7c8f6c8a292a02235714b05f7cdef4695340af892229b486150c22-runc.jFLQxa.mount: Deactivated successfully. Feb 13 19:47:50.134832 (udev-worker)[6046]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:47:50.137056 systemd-networkd[1717]: lxc_health: Link UP Feb 13 19:47:50.142322 systemd-networkd[1717]: lxc_health: Gained carrier Feb 13 19:47:50.144321 (udev-worker)[6047]: Network interface NamePolicy= disabled on kernel command line. 
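The lxc_health link above is the veth pair cilium-agent creates for endpoint health probing; systemd-networkd picks up the link and its carrier, and ntpd will later bind the IPv6 link-local address. Spotting it needs only the Go standard library:

    // Find the lxc_health interface and its IPv6 link-local address.
    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	ifi, err := net.InterfaceByName("lxc_health")
    	if err != nil {
    		fmt.Println("lxc_health not present:", err)
    		return
    	}
    	addrs, err := ifi.Addrs()
    	if err != nil {
    		panic(err)
    	}
    	for _, a := range addrs {
    		if ipn, ok := a.(*net.IPNet); ok && ipn.IP.IsLinkLocalUnicast() {
    			fmt.Printf("lxc_health: %s (flags %s)\n", ipn.IP, ifi.Flags)
    		}
    	}
    }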
Feb 13 19:47:50.845367 kubelet[3394]: I0213 19:47:50.845282 3394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c7whx" podStartSLOduration=10.845253722 podStartE2EDuration="10.845253722s" podCreationTimestamp="2025-02-13 19:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:47:45.627089269 +0000 UTC m=+119.519566907" watchObservedRunningTime="2025-02-13 19:47:50.845253722 +0000 UTC m=+124.737731402" Feb 13 19:47:51.503219 systemd-networkd[1717]: lxc_health: Gained IPv6LL Feb 13 19:47:53.913701 ntpd[1864]: Listen normally on 14 lxc_health [fe80::1853:d9ff:fe3f:fcff%14]:123 Feb 13 19:47:53.915109 ntpd[1864]: 13 Feb 19:47:53 ntpd[1864]: Listen normally on 14 lxc_health [fe80::1853:d9ff:fe3f:fcff%14]:123 Feb 13 19:47:55.567378 sshd[5264]: Connection closed by 139.178.89.65 port 52240 Feb 13 19:47:55.571481 sshd-session[5222]: pam_unix(sshd:session): session closed for user core Feb 13 19:47:55.577371 systemd[1]: sshd@27-172.31.19.60:22-139.178.89.65:52240.service: Deactivated successfully. Feb 13 19:47:55.582720 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 19:47:55.585755 systemd-logind[1873]: Session 28 logged out. Waiting for processes to exit. Feb 13 19:47:55.588879 systemd-logind[1873]: Removed session 28. Feb 13 19:48:09.475346 kubelet[3394]: E0213 19:48:09.475047 3394 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-60?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 19:48:10.696054 systemd[1]: cri-containerd-ff6e8910a01fd90074ebd38c2a73424088930759304472993a03af3b7d4dfe3a.scope: Deactivated successfully. Feb 13 19:48:10.696476 systemd[1]: cri-containerd-ff6e8910a01fd90074ebd38c2a73424088930759304472993a03af3b7d4dfe3a.scope: Consumed 4.170s CPU time, 25.2M memory peak, 0B memory swap peak. Feb 13 19:48:10.730058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff6e8910a01fd90074ebd38c2a73424088930759304472993a03af3b7d4dfe3a-rootfs.mount: Deactivated successfully. Feb 13 19:48:10.764025 containerd[1881]: time="2025-02-13T19:48:10.763921635Z" level=info msg="shim disconnected" id=ff6e8910a01fd90074ebd38c2a73424088930759304472993a03af3b7d4dfe3a namespace=k8s.io Feb 13 19:48:10.764025 containerd[1881]: time="2025-02-13T19:48:10.764015969Z" level=warning msg="cleaning up after shim disconnected" id=ff6e8910a01fd90074ebd38c2a73424088930759304472993a03af3b7d4dfe3a namespace=k8s.io Feb 13 19:48:10.764025 containerd[1881]: time="2025-02-13T19:48:10.764028259Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:48:11.652928 kubelet[3394]: I0213 19:48:11.652878 3394 scope.go:117] "RemoveContainer" containerID="ff6e8910a01fd90074ebd38c2a73424088930759304472993a03af3b7d4dfe3a" Feb 13 19:48:11.661918 containerd[1881]: time="2025-02-13T19:48:11.660925184Z" level=info msg="CreateContainer within sandbox \"fb2bb1a311a751d4ab8a5701e5412d648de56e2e5196b6c8e5e306635a6bda84\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 13 19:48:11.684662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241760733.mount: Deactivated successfully. 
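The pod_startup_latency_tracker entry near the top of this stretch is plain arithmetic: observedRunningTime minus podCreationTimestamp, with the zero pulling timestamps contributing nothing because every image was already on the node. Reproducing the 10.845253722s figure:

    // podStartSLOduration = observedRunningTime - podCreationTimestamp.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, _ := time.Parse(layout, "2025-02-13 19:47:40 +0000 UTC")
    	running, _ := time.Parse(layout, "2025-02-13 19:47:50.845253722 +0000 UTC")
    	fmt.Println(running.Sub(created).Seconds()) // 10.845253722
    }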
Feb 13 19:48:11.694496 containerd[1881]: time="2025-02-13T19:48:11.694445720Z" level=info msg="CreateContainer within sandbox \"fb2bb1a311a751d4ab8a5701e5412d648de56e2e5196b6c8e5e306635a6bda84\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"dd617c3e4770228c715710bd7d7aedeb8ced157e642e3389ca82d24a71490a4b\"" Feb 13 19:48:11.695220 containerd[1881]: time="2025-02-13T19:48:11.695183215Z" level=info msg="StartContainer for \"dd617c3e4770228c715710bd7d7aedeb8ced157e642e3389ca82d24a71490a4b\"" Feb 13 19:48:11.742051 systemd[1]: Started cri-containerd-dd617c3e4770228c715710bd7d7aedeb8ced157e642e3389ca82d24a71490a4b.scope - libcontainer container dd617c3e4770228c715710bd7d7aedeb8ced157e642e3389ca82d24a71490a4b. Feb 13 19:48:11.802958 containerd[1881]: time="2025-02-13T19:48:11.802674960Z" level=info msg="StartContainer for \"dd617c3e4770228c715710bd7d7aedeb8ced157e642e3389ca82d24a71490a4b\" returns successfully" Feb 13 19:48:13.851330 systemd[1]: cri-containerd-f44560c6e4d6f135ae2b035dfae753c887c133682825deb7db8e8d5b317a7cb5.scope: Deactivated successfully. Feb 13 19:48:13.851688 systemd[1]: cri-containerd-f44560c6e4d6f135ae2b035dfae753c887c133682825deb7db8e8d5b317a7cb5.scope: Consumed 1.561s CPU time, 16.4M memory peak, 0B memory swap peak. Feb 13 19:48:13.922437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f44560c6e4d6f135ae2b035dfae753c887c133682825deb7db8e8d5b317a7cb5-rootfs.mount: Deactivated successfully. Feb 13 19:48:13.948505 containerd[1881]: time="2025-02-13T19:48:13.948259173Z" level=info msg="shim disconnected" id=f44560c6e4d6f135ae2b035dfae753c887c133682825deb7db8e8d5b317a7cb5 namespace=k8s.io Feb 13 19:48:13.948505 containerd[1881]: time="2025-02-13T19:48:13.948494276Z" level=warning msg="cleaning up after shim disconnected" id=f44560c6e4d6f135ae2b035dfae753c887c133682825deb7db8e8d5b317a7cb5 namespace=k8s.io Feb 13 19:48:13.948505 containerd[1881]: time="2025-02-13T19:48:13.948515977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:48:14.677913 kubelet[3394]: I0213 19:48:14.677873 3394 scope.go:117] "RemoveContainer" containerID="f44560c6e4d6f135ae2b035dfae753c887c133682825deb7db8e8d5b317a7cb5" Feb 13 19:48:14.682868 containerd[1881]: time="2025-02-13T19:48:14.682639231Z" level=info msg="CreateContainer within sandbox \"9f2d8d68f1f15deaab479971cb6eadbd0e852541f895948b7d2c05e20bfcee88\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 13 19:48:14.727433 containerd[1881]: time="2025-02-13T19:48:14.727375923Z" level=info msg="CreateContainer within sandbox \"9f2d8d68f1f15deaab479971cb6eadbd0e852541f895948b7d2c05e20bfcee88\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"af36b21f1c89ea9b6fbb73c303261d723f1603b91c1d6679e17179fef8e30fdc\"" Feb 13 19:48:14.728169 containerd[1881]: time="2025-02-13T19:48:14.728122440Z" level=info msg="StartContainer for \"af36b21f1c89ea9b6fbb73c303261d723f1603b91c1d6679e17179fef8e30fdc\"" Feb 13 19:48:14.806082 systemd[1]: Started cri-containerd-af36b21f1c89ea9b6fbb73c303261d723f1603b91c1d6679e17179fef8e30fdc.scope - libcontainer container af36b21f1c89ea9b6fbb73c303261d723f1603b91c1d6679e17179fef8e30fdc. 
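The "Consumed 1.561s CPU time, 16.4M memory peak" figures here (and 4.170s/25.2M for the controller-manager above) are systemd reading the scope's cgroup v2 accounting at teardown. Roughly where they come from, assuming a cgroup v2 layout and a kernel new enough to expose memory.peak; the scope path below is a guess, not taken from the log:

    // Read the unit's cgroup v2 cpu.stat (usage_usec) and memory.peak files.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	base := "/sys/fs/cgroup/system.slice/cri-containerd-f44560c6e4d6f135ae2b035dfae753c887c133682825deb7db8e8d5b317a7cb5.scope"

    	f, err := os.Open(base + "/cpu.stat")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		if v, ok := strings.CutPrefix(sc.Text(), "usage_usec "); ok {
    			fmt.Println("cpu usage_usec:", v)
    		}
    	}

    	if peak, err := os.ReadFile(base + "/memory.peak"); err == nil {
    		fmt.Println("memory peak bytes:", strings.TrimSpace(string(peak)))
    	}
    }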
Feb 13 19:48:14.863855 containerd[1881]: time="2025-02-13T19:48:14.863743004Z" level=info msg="StartContainer for \"af36b21f1c89ea9b6fbb73c303261d723f1603b91c1d6679e17179fef8e30fdc\" returns successfully" Feb 13 19:48:19.476491 kubelet[3394]: E0213 19:48:19.476345 3394 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-60?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 19:48:29.483828 kubelet[3394]: E0213 19:48:29.483740 3394 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-60?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
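The recurring controller.go:195 errors bracketing this stretch are the node's lease-renewal loop timing out against the apiserver at 172.31.19.60:6443 while the control-plane containers cycle; each attempt PUTs the Lease object with a 10s client timeout and simply tries again next interval. The loop's shape, sketched with client-go (in-cluster config for brevity; the kubelet really uses its own kubeconfig and client):

    // Renew the node's Lease in kube-node-lease with a 10s timeout; on failure,
    // log and retry next interval, as controller.go does above.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func renewLease(cs *kubernetes.Clientset, node string) error {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, node, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	now := metav1.NewMicroTime(time.Now())
    	lease.Spec.RenewTime = &now
    	_, err = cs.CoordinationV1().Leases("kube-node-lease").Update(ctx, lease, metav1.UpdateOptions{})
    	return err
    }

    func main() {
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for range time.Tick(10 * time.Second) {
    		if err := renewLease(cs, "ip-172-31-19-60"); err != nil {
    			fmt.Println("Failed to update lease:", err)
    		}
    	}
    }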