Jan 13 20:38:33.080837 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 13 20:38:33.080881 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:38:33.080897 kernel: BIOS-provided physical RAM map:
Jan 13 20:38:33.080910 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 20:38:33.080920 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 20:38:33.080929 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 20:38:33.080946 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 13 20:38:33.080958 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 13 20:38:33.080970 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 13 20:38:33.080982 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 20:38:33.080995 kernel: NX (Execute Disable) protection: active
Jan 13 20:38:33.081007 kernel: APIC: Static calls initialized
Jan 13 20:38:33.081019 kernel: SMBIOS 2.7 present.
Jan 13 20:38:33.081031 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 13 20:38:33.081050 kernel: Hypervisor detected: KVM
Jan 13 20:38:33.081064 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:38:33.081078 kernel: kvm-clock: using sched offset of 9028740674 cycles
Jan 13 20:38:33.081092 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:38:33.081106 kernel: tsc: Detected 2499.996 MHz processor
Jan 13 20:38:33.081120 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:38:33.081134 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:38:33.081151 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 13 20:38:33.081165 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 20:38:33.081179 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:38:33.081193 kernel: Using GB pages for direct mapping
Jan 13 20:38:33.081206 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:38:33.081219 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 13 20:38:33.081234 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 13 20:38:33.083713 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 20:38:33.083736 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 13 20:38:33.083757 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 13 20:38:33.083772 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 20:38:33.083786 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 20:38:33.083801 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 13 20:38:33.083813 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 20:38:33.083828 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 13 20:38:33.083842 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 13 20:38:33.083856 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 20:38:33.083871 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 13 20:38:33.083889 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 13 20:38:33.083910 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 13 20:38:33.083925 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 13 20:38:33.083992 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 13 20:38:33.084010 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 13 20:38:33.084029 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 13 20:38:33.084043 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 13 20:38:33.084058 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 13 20:38:33.084073 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 13 20:38:33.084088 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 20:38:33.084103 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 20:38:33.084117 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 13 20:38:33.084132 kernel: NUMA: Initialized distance table, cnt=1
Jan 13 20:38:33.084146 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 13 20:38:33.084165 kernel: Zone ranges:
Jan 13 20:38:33.084179 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:38:33.084194 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 13 20:38:33.084208 kernel: Normal empty
Jan 13 20:38:33.084223 kernel: Movable zone start for each node
Jan 13 20:38:33.084237 kernel: Early memory node ranges
Jan 13 20:38:33.084267 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 20:38:33.084281 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 13 20:38:33.084296 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 13 20:38:33.084311 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:38:33.084330 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 20:38:33.084345 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 13 20:38:33.084455 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 13 20:38:33.084473 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:38:33.084488 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 13 20:38:33.084502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:38:33.084516 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:38:33.084528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:38:33.084541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:38:33.084567 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:38:33.084585 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 20:38:33.084603 kernel: TSC deadline timer available
Jan 13 20:38:33.084621 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 20:38:33.084636 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:38:33.084649 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 13 20:38:33.084663 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:38:33.084678 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:38:33.084692 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 20:38:33.084711 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 20:38:33.084725 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 20:38:33.084740 kernel: pcpu-alloc: [0] 0 1
Jan 13 20:38:33.084753 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:38:33.084768 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:38:33.084785 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:38:33.084800 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:38:33.084814 kernel: random: crng init done
Jan 13 20:38:33.084831 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:38:33.084845 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 20:38:33.084859 kernel: Fallback order for Node 0: 0
Jan 13 20:38:33.084874 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jan 13 20:38:33.084888 kernel: Policy zone: DMA32
Jan 13 20:38:33.084902 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:38:33.084917 kernel: Memory: 1930300K/2057760K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 127200K reserved, 0K cma-reserved)
Jan 13 20:38:33.084931 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:38:33.084946 kernel: Kernel/User page tables isolation: enabled
Jan 13 20:38:33.084964 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 13 20:38:33.084978 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:38:33.084992 kernel: Dynamic Preempt: voluntary
Jan 13 20:38:33.085007 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:38:33.085022 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:38:33.085037 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:38:33.085052 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:38:33.085066 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:38:33.085080 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:38:33.085097 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:38:33.085112 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:38:33.085126 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 20:38:33.085140 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:38:33.085154 kernel: Console: colour VGA+ 80x25
Jan 13 20:38:33.085168 kernel: printk: console [ttyS0] enabled
Jan 13 20:38:33.085182 kernel: ACPI: Core revision 20230628
Jan 13 20:38:33.085197 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 13 20:38:33.085211 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:38:33.085228 kernel: x2apic enabled
Jan 13 20:38:33.087673 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:38:33.087731 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 13 20:38:33.087751 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jan 13 20:38:33.087766 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 13 20:38:33.087782 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 13 20:38:33.087797 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:38:33.087812 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:38:33.087826 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:38:33.087841 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:38:33.087856 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 13 20:38:33.087871 kernel: RETBleed: Vulnerable
Jan 13 20:38:33.087887 kernel: Speculative Store Bypass: Vulnerable
Jan 13 20:38:33.087904 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 20:38:33.087920 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 20:38:33.087996 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 13 20:38:33.088014 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:38:33.088029 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:38:33.088045 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:38:33.088064 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 13 20:38:33.088078 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 13 20:38:33.088093 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 13 20:38:33.088108 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 13 20:38:33.088123 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 13 20:38:33.088138 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 13 20:38:33.088153 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:38:33.088167 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 13 20:38:33.088182 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 13 20:38:33.088196 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 13 20:38:33.088211 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 13 20:38:33.088229 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 13 20:38:33.089384 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 13 20:38:33.089404 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 13 20:38:33.089435 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:38:33.089450 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:38:33.089464 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:38:33.089477 kernel: landlock: Up and running.
Jan 13 20:38:33.089491 kernel: SELinux: Initializing.
Jan 13 20:38:33.089504 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 20:38:33.089518 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 20:38:33.089531 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 13 20:38:33.089550 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:38:33.089564 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:38:33.089577 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:38:33.089591 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 13 20:38:33.089605 kernel: signal: max sigframe size: 3632
Jan 13 20:38:33.089619 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:38:33.089633 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:38:33.089646 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 20:38:33.089660 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:38:33.089676 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:38:33.089689 kernel: .... node #0, CPUs: #1
Jan 13 20:38:33.089703 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 13 20:38:33.089717 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 20:38:33.089731 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:38:33.089743 kernel: smpboot: Max logical packages: 1
Jan 13 20:38:33.089756 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jan 13 20:38:33.089769 kernel: devtmpfs: initialized
Jan 13 20:38:33.089783 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:38:33.089799 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:38:33.089812 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:38:33.089825 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:38:33.089838 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:38:33.089852 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:38:33.089865 kernel: audit: type=2000 audit(1736800711.862:1): state=initialized audit_enabled=0 res=1
Jan 13 20:38:33.089879 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:38:33.089892 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:38:33.089908 kernel: cpuidle: using governor menu
Jan 13 20:38:33.089921 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:38:33.089934 kernel: dca service started, version 1.12.1
Jan 13 20:38:33.089947 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:38:33.089960 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:38:33.089973 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:38:33.089986 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:38:33.089999 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:38:33.090012 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:38:33.090029 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:38:33.090042 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:38:33.090055 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:38:33.090068 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:38:33.090081 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 13 20:38:33.090093 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:38:33.090106 kernel: ACPI: Interpreter enabled
Jan 13 20:38:33.090120 kernel: ACPI: PM: (supports S0 S5)
Jan 13 20:38:33.090132 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:38:33.090146 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:38:33.090162 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:38:33.090175 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 13 20:38:33.090188 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:38:33.090809 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:38:33.091861 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 20:38:33.092079 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 20:38:33.092102 kernel: acpiphp: Slot [3] registered
Jan 13 20:38:33.092124 kernel: acpiphp: Slot [4] registered
Jan 13 20:38:33.092139 kernel: acpiphp: Slot [5] registered
Jan 13 20:38:33.092154 kernel: acpiphp: Slot [6] registered
Jan 13 20:38:33.092169 kernel: acpiphp: Slot [7] registered
Jan 13 20:38:33.092184 kernel: acpiphp: Slot [8] registered
Jan 13 20:38:33.092198 kernel: acpiphp: Slot [9] registered
Jan 13 20:38:33.092214 kernel: acpiphp: Slot [10] registered
Jan 13 20:38:33.092229 kernel: acpiphp: Slot [11] registered
Jan 13 20:38:33.093366 kernel: acpiphp: Slot [12] registered
Jan 13 20:38:33.093391 kernel: acpiphp: Slot [13] registered
Jan 13 20:38:33.093405 kernel: acpiphp: Slot [14] registered
Jan 13 20:38:33.093418 kernel: acpiphp: Slot [15] registered
Jan 13 20:38:33.093431 kernel: acpiphp: Slot [16] registered
Jan 13 20:38:33.093444 kernel: acpiphp: Slot [17] registered
Jan 13 20:38:33.093457 kernel: acpiphp: Slot [18] registered
Jan 13 20:38:33.093470 kernel: acpiphp: Slot [19] registered
Jan 13 20:38:33.093483 kernel: acpiphp: Slot [20] registered
Jan 13 20:38:33.093496 kernel: acpiphp: Slot [21] registered
Jan 13 20:38:33.093599 kernel: acpiphp: Slot [22] registered
Jan 13 20:38:33.093620 kernel: acpiphp: Slot [23] registered
Jan 13 20:38:33.093634 kernel: acpiphp: Slot [24] registered
Jan 13 20:38:33.093647 kernel: acpiphp: Slot [25] registered
Jan 13 20:38:33.093660 kernel: acpiphp: Slot [26] registered
Jan 13 20:38:33.093674 kernel: acpiphp: Slot [27] registered
Jan 13 20:38:33.093687 kernel: acpiphp: Slot [28] registered
Jan 13 20:38:33.093700 kernel: acpiphp: Slot [29] registered
Jan 13 20:38:33.093715 kernel: acpiphp: Slot [30] registered
Jan 13 20:38:33.093728 kernel: acpiphp: Slot [31] registered
Jan 13 20:38:33.093746 kernel: PCI host bridge to bus 0000:00
Jan 13 20:38:33.093916 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:38:33.094040 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:38:33.094157 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:38:33.096767 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 13 20:38:33.096975 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:38:33.097134 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 20:38:33.098436 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 20:38:33.099386 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 13 20:38:33.099710 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 13 20:38:33.099928 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jan 13 20:38:33.100134 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 13 20:38:33.100472 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 13 20:38:33.100629 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 13 20:38:33.101021 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 13 20:38:33.101203 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 13 20:38:33.104491 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 13 20:38:33.104786 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 11718 usecs
Jan 13 20:38:33.105035 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 13 20:38:33.105185 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 13 20:38:33.107921 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 13 20:38:33.108132 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:38:33.108290 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 20:38:33.108489 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 13 20:38:33.108629 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 20:38:33.108761 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 13 20:38:33.108778 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:38:33.108801 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:38:33.108815 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:38:33.108828 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:38:33.108842 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 20:38:33.108856 kernel: iommu: Default domain type: Translated
Jan 13 20:38:33.108870 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:38:33.108883 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:38:33.108897 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:38:33.108911 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 20:38:33.108926 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 13 20:38:33.109049 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 13 20:38:33.110727 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 13 20:38:33.111004 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:38:33.111057 kernel: vgaarb: loaded
Jan 13 20:38:33.111076 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 13 20:38:33.111091 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 13 20:38:33.111107 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:38:33.111154 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:38:33.111179 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:38:33.111225 kernel: pnp: PnP ACPI init
Jan 13 20:38:33.114136 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 20:38:33.114166 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:38:33.114934 kernel: NET: Registered PF_INET protocol family
Jan 13 20:38:33.114961 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:38:33.115016 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 13 20:38:33.115034 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:38:33.115091 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 20:38:33.115111 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 20:38:33.115127 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 13 20:38:33.115176 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 20:38:33.115196 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 20:38:33.115286 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:38:33.115306 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:38:33.115631 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:38:33.115843 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:38:33.116125 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:38:33.116734 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 13 20:38:33.117003 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 20:38:33.117028 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:38:33.117045 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 20:38:33.117097 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 13 20:38:33.117114 kernel: clocksource: Switched to clocksource tsc
Jan 13 20:38:33.122352 kernel: Initialise system trusted keyrings
Jan 13 20:38:33.122380 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 13 20:38:33.122397 kernel: Key type asymmetric registered
Jan 13 20:38:33.122413 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:38:33.122428 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:38:33.122444 kernel: io scheduler mq-deadline registered
Jan 13 20:38:33.122460 kernel: io scheduler kyber registered
Jan 13 20:38:33.122475 kernel: io scheduler bfq registered
Jan 13 20:38:33.122491 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:38:33.122507 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:38:33.122587 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:38:33.122608 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:38:33.122625 kernel: i8042: Warning: Keylock active
Jan 13 20:38:33.122640 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:38:33.122656 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:38:33.122866 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 13 20:38:33.123000 kernel: rtc_cmos 00:00: registered as rtc0
Jan 13 20:38:33.123129 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T20:38:32 UTC (1736800712)
Jan 13 20:38:33.123286 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 13 20:38:33.123307 kernel: intel_pstate: CPU model not supported
Jan 13 20:38:33.123323 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:38:33.123339 kernel: Segment Routing with IPv6
Jan 13 20:38:33.123355 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:38:33.123370 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:38:33.123386 kernel: Key type dns_resolver registered
Jan 13 20:38:33.123401 kernel: IPI shorthand broadcast: enabled
Jan 13 20:38:33.123417 kernel: sched_clock: Marking stable (779005663, 296334621)->(1196316554, -120976270)
Jan 13 20:38:33.123437 kernel: registered taskstats version 1
Jan 13 20:38:33.123453 kernel: Loading compiled-in X.509 certificates
Jan 13 20:38:33.123468 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e'
Jan 13 20:38:33.123483 kernel: Key type .fscrypt registered
Jan 13 20:38:33.123498 kernel: Key type fscrypt-provisioning registered
Jan 13 20:38:33.123514 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:38:33.123530 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:38:33.123545 kernel: ima: No architecture policies found
Jan 13 20:38:33.123571 kernel: clk: Disabling unused clocks
Jan 13 20:38:33.123587 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 13 20:38:33.123658 kernel: Write protecting the kernel read-only data: 38912k
Jan 13 20:38:33.123678 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 13 20:38:33.123694 kernel: Run /init as init process
Jan 13 20:38:33.123709 kernel: with arguments:
Jan 13 20:38:33.123725 kernel: /init
Jan 13 20:38:33.123740 kernel: with environment:
Jan 13 20:38:33.123756 kernel: HOME=/
Jan 13 20:38:33.123772 kernel: TERM=linux
Jan 13 20:38:33.123792 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:38:33.123839 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:38:33.123859 systemd[1]: Detected virtualization amazon.
Jan 13 20:38:33.123877 systemd[1]: Detected architecture x86-64.
Jan 13 20:38:33.123893 systemd[1]: Running in initrd.
Jan 13 20:38:33.123911 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:38:33.123927 systemd[1]: Hostname set to <localhost>.
Jan 13 20:38:33.123946 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:38:33.123963 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:38:33.123980 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:38:33.123998 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:38:33.124082 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:38:33.124099 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:38:33.124117 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:38:33.124175 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:38:33.124196 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:38:33.124213 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:38:33.124230 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:38:33.124264 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:38:33.124282 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:38:33.124299 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:38:33.124320 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:38:33.124337 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:38:33.124354 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:38:33.124372 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:38:33.124390 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:38:33.124407 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:38:33.124453 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:38:33.124471 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:38:33.124488 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:38:33.124509 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:38:33.124526 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:38:33.124543 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:38:33.124561 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:38:33.124579 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:38:33.124601 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:38:33.124619 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:38:33.124636 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:38:33.124689 systemd-journald[179]: Collecting audit messages is disabled.
Jan 13 20:38:33.124731 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:38:33.124749 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:38:33.124767 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:38:33.124784 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:38:33.124802 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:38:33.124824 systemd-journald[179]: Journal started
Jan 13 20:38:33.124980 systemd-journald[179]: Runtime Journal (/run/log/journal/ec22f2f8efa8428c4e29107669108a3f) is 4.8M, max 38.5M, 33.7M free.
Jan 13 20:38:33.129276 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:38:33.103600 systemd-modules-load[180]: Inserted module 'overlay'
Jan 13 20:38:33.136630 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:38:33.289751 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:38:33.289787 kernel: Bridge firewalling registered
Jan 13 20:38:33.161285 systemd-modules-load[180]: Inserted module 'br_netfilter'
Jan 13 20:38:33.286033 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:38:33.286977 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:38:33.293920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:38:33.301705 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:38:33.324432 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:38:33.357303 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:38:33.362543 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:38:33.364054 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:38:33.376520 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:38:33.377107 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:38:33.394850 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:38:33.416708 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:38:33.437735 dracut-cmdline[216]: dracut-dracut-053
Jan 13 20:38:33.442722 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:38:33.467502 systemd-resolved[209]: Positive Trust Anchors:
Jan 13 20:38:33.467524 systemd-resolved[209]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:38:33.467586 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:38:33.482025 systemd-resolved[209]: Defaulting to hostname 'linux'.
Jan 13 20:38:33.485157 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:38:33.486867 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:38:33.535272 kernel: SCSI subsystem initialized
Jan 13 20:38:33.547284 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:38:33.558276 kernel: iscsi: registered transport (tcp)
Jan 13 20:38:33.581281 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:38:33.581359 kernel: QLogic iSCSI HBA Driver
Jan 13 20:38:33.628994 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:38:33.645062 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:38:33.692382 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:38:33.692468 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:38:33.692491 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:38:33.751305 kernel: raid6: avx512x4 gen() 6092 MB/s
Jan 13 20:38:33.768326 kernel: raid6: avx512x2 gen() 5944 MB/s
Jan 13 20:38:33.785303 kernel: raid6: avx512x1 gen() 12336 MB/s
Jan 13 20:38:33.802307 kernel: raid6: avx2x4 gen() 14526 MB/s
Jan 13 20:38:33.819303 kernel: raid6: avx2x2 gen() 12410 MB/s
Jan 13 20:38:33.837116 kernel: raid6: avx2x1 gen() 9547 MB/s
Jan 13 20:38:33.837194 kernel: raid6: using algorithm avx2x4 gen() 14526 MB/s
Jan 13 20:38:33.854780 kernel: raid6: .... xor() 4420 MB/s, rmw enabled
Jan 13 20:38:33.854865 kernel: raid6: using avx512x2 recovery algorithm
Jan 13 20:38:33.886279 kernel: xor: automatically using best checksumming function avx
Jan 13 20:38:34.151272 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:38:34.169222 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:38:34.179494 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:38:34.214436 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Jan 13 20:38:34.226735 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:38:34.237128 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:38:34.262027 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Jan 13 20:38:34.301959 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:38:34.310695 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:38:34.372445 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:38:34.390436 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:38:34.463654 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:38:34.466311 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:38:34.472750 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:38:34.474387 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:38:34.481467 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:38:34.512877 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:38:34.558844 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 20:38:34.566201 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 13 20:38:34.566510 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 13 20:38:34.571278 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 20:38:34.590120 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 20:38:34.590481 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 13 20:38:34.590670 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 13 20:38:34.590942 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:b5:50:91:f4:05
Jan 13 20:38:34.572009 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:38:34.572870 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:38:34.574693 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:38:34.607186 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:38:34.607229 kernel: GPT:9289727 != 16777215
Jan 13 20:38:34.607264 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:38:34.607283 kernel: GPT:9289727 != 16777215
Jan 13 20:38:34.607300 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:38:34.607322 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:38:34.577177 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:38:34.577429 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:38:34.579552 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:38:34.606951 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:38:34.623889 (udev-worker)[461]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:38:34.646463 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 20:38:34.646506 kernel: AES CTR mode by8 optimization enabled
Jan 13 20:38:34.827645 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (463)
Jan 13 20:38:34.835290 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/nvme0n1p3 scanned by (udev-worker) (458)
Jan 13 20:38:34.885931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:38:34.896517 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:38:34.944231 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:38:34.953870 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 13 20:38:34.961235 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 13 20:38:34.961383 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 13 20:38:34.971948 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 13 20:38:34.978777 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 20:38:34.993480 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:38:35.003647 disk-uuid[632]: Primary Header is updated.
Jan 13 20:38:35.003647 disk-uuid[632]: Secondary Entries is updated.
Jan 13 20:38:35.003647 disk-uuid[632]: Secondary Header is updated.
Jan 13 20:38:35.013294 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:38:36.040444 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:38:36.045972 disk-uuid[633]: The operation has completed successfully.
Jan 13 20:38:36.208731 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:38:36.208860 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:38:36.237664 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:38:36.252737 sh[893]: Success
Jan 13 20:38:36.286321 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 20:38:36.407633 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:38:36.429387 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:38:36.438004 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:38:36.461786 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a
Jan 13 20:38:36.461860 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:38:36.461880 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:38:36.464001 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:38:36.464638 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:38:36.569270 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:38:36.600768 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:38:36.612059 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:38:36.623771 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:38:36.637561 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:38:36.665465 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:38:36.665536 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:38:36.665556 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:38:36.674944 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:38:36.690299 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:38:36.691572 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:38:36.713294 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:38:36.724471 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:38:36.771974 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:38:36.785202 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:38:36.859523 systemd-networkd[1085]: lo: Link UP
Jan 13 20:38:36.859534 systemd-networkd[1085]: lo: Gained carrier
Jan 13 20:38:36.861688 systemd-networkd[1085]: Enumeration completed
Jan 13 20:38:36.861804 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:38:36.867947 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:38:36.867953 systemd-networkd[1085]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:38:36.877228 systemd[1]: Reached target network.target - Network.
Jan 13 20:38:36.883149 systemd-networkd[1085]: eth0: Link UP
Jan 13 20:38:36.883154 systemd-networkd[1085]: eth0: Gained carrier
Jan 13 20:38:36.883172 systemd-networkd[1085]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:38:36.899348 systemd-networkd[1085]: eth0: DHCPv4 address 172.31.21.52/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:38:37.154453 ignition[1030]: Ignition 2.20.0
Jan 13 20:38:37.154467 ignition[1030]: Stage: fetch-offline
Jan 13 20:38:37.154700 ignition[1030]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:38:37.154714 ignition[1030]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:38:37.155024 ignition[1030]: Ignition finished successfully
Jan 13 20:38:37.161259 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:38:37.174492 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:38:37.213362 ignition[1095]: Ignition 2.20.0
Jan 13 20:38:37.213378 ignition[1095]: Stage: fetch
Jan 13 20:38:37.213845 ignition[1095]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:38:37.213860 ignition[1095]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:38:37.214192 ignition[1095]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:38:37.243446 ignition[1095]: PUT result: OK
Jan 13 20:38:37.246953 ignition[1095]: parsed url from cmdline: ""
Jan 13 20:38:37.246966 ignition[1095]: no config URL provided
Jan 13 20:38:37.246975 ignition[1095]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:38:37.246988 ignition[1095]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:38:37.247009 ignition[1095]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:38:37.249237 ignition[1095]: PUT result: OK
Jan 13 20:38:37.249315 ignition[1095]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 20:38:37.252332 ignition[1095]: GET result: OK
Jan 13 20:38:37.252427 ignition[1095]: parsing config with SHA512: 3fe7588c4eb387daff7ebe341fe339d68513169deadce6c20780c39d0ee09be82d6c1e8ea435632a85a0c4683c14e27e72031766135773e8945d937426126273
Jan 13 20:38:37.260995 unknown[1095]: fetched base config from "system"
Jan 13 20:38:37.261011 unknown[1095]: fetched base config from "system"
Jan 13 20:38:37.261018 unknown[1095]: fetched user config from "aws"
Jan 13 20:38:37.262691 ignition[1095]: fetch: fetch complete
Jan 13 20:38:37.266255 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:38:37.262697 ignition[1095]: fetch: fetch passed
Jan 13 20:38:37.262753 ignition[1095]: Ignition finished successfully
Jan 13 20:38:37.277585 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:38:37.302147 ignition[1101]: Ignition 2.20.0
Jan 13 20:38:37.302182 ignition[1101]: Stage: kargs
Jan 13 20:38:37.303639 ignition[1101]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:38:37.303655 ignition[1101]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:38:37.304000 ignition[1101]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:38:37.305900 ignition[1101]: PUT result: OK
Jan 13 20:38:37.314131 ignition[1101]: kargs: kargs passed
Jan 13 20:38:37.314213 ignition[1101]: Ignition finished successfully
Jan 13 20:38:37.317906 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:38:37.324868 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:38:37.393469 ignition[1107]: Ignition 2.20.0
Jan 13 20:38:37.393493 ignition[1107]: Stage: disks
Jan 13 20:38:37.394401 ignition[1107]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:38:37.394429 ignition[1107]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:38:37.394597 ignition[1107]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:38:37.396046 ignition[1107]: PUT result: OK
Jan 13 20:38:37.405528 ignition[1107]: disks: disks passed
Jan 13 20:38:37.405603 ignition[1107]: Ignition finished successfully
Jan 13 20:38:37.408511 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:38:37.412694 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:38:37.412809 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:38:37.413145 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:38:37.413353 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:38:37.413534 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:38:37.425627 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:38:37.468028 systemd-fsck[1116]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:38:37.475657 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:38:37.483569 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:38:37.695266 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 13 20:38:37.697367 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:38:37.699030 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:38:37.717735 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:38:37.730703 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:38:37.735347 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:38:37.735431 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:38:37.735472 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:38:37.762013 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:38:37.765270 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1135)
Jan 13 20:38:37.769503 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:38:37.769572 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:38:37.769593 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:38:37.772969 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:38:37.781514 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:38:37.789497 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:38:37.987593 systemd-networkd[1085]: eth0: Gained IPv6LL
Jan 13 20:38:38.165140 initrd-setup-root[1159]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:38:38.191274 initrd-setup-root[1166]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:38:38.216146 initrd-setup-root[1173]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:38:38.231599 initrd-setup-root[1180]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:38:38.620722 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:38:38.632970 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:38:38.637709 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:38:38.662268 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:38:38.662787 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:38:38.713261 ignition[1248]: INFO : Ignition 2.20.0
Jan 13 20:38:38.713261 ignition[1248]: INFO : Stage: mount
Jan 13 20:38:38.716991 ignition[1248]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:38:38.716991 ignition[1248]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:38:38.716991 ignition[1248]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:38:38.734171 ignition[1248]: INFO : PUT result: OK
Jan 13 20:38:38.738815 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:38:38.742429 ignition[1248]: INFO : mount: mount passed
Jan 13 20:38:38.743377 ignition[1248]: INFO : Ignition finished successfully
Jan 13 20:38:38.745750 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:38:38.754666 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:38:38.772483 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:38:38.787423 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1260)
Jan 13 20:38:38.787476 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:38:38.789969 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:38:38.790023 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:38:38.795309 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:38:38.797750 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:38:38.826169 ignition[1277]: INFO : Ignition 2.20.0 Jan 13 20:38:38.826169 ignition[1277]: INFO : Stage: files Jan 13 20:38:38.831145 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:38:38.831145 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:38:38.833700 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:38:38.835847 ignition[1277]: INFO : PUT result: OK Jan 13 20:38:38.838825 ignition[1277]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:38:38.840841 ignition[1277]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:38:38.840841 ignition[1277]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:38:38.862988 ignition[1277]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:38:38.864864 ignition[1277]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:38:38.866723 unknown[1277]: wrote ssh authorized keys file for user: core Jan 13 20:38:38.867972 ignition[1277]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:38:38.870786 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:38:38.872916 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 20:38:38.986287 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 20:38:39.126408 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 20:38:39.126408 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:38:39.131694 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 13 20:38:39.445309 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 20:38:39.564395 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:38:39.567267 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:38:39.567267 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:38:39.567267 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:38:39.567267 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:38:39.567267 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:38:39.567267 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:38:39.567267 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 
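
The files stage above acts on the instance's Ignition config: it provisions the "core" user's SSH keys and downloads the helm tarball into /opt. The config itself never appears in the log; below is a hypothetical fragment that would produce entries like these, assuming the Ignition v3 JSON schema — the helm URL is taken from the log, the SSH key is a placeholder:

    import json

    # Hypothetical Ignition v3 config fragment; only the helm URL comes from the log.
    config = {
        "ignition": {"version": "3.3.0"},
        "passwd": {
            "users": [
                {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
            ]
        },
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
                }
            ]
        },
    }
    print(json.dumps(config, indent=2))
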
13 20:38:39.567267 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:38:39.567267 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:38:39.567267 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:38:39.567267 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:38:39.567267 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:38:39.567267 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:38:39.567267 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 13 20:38:39.945518 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 20:38:40.358987 ignition[1277]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 13 20:38:40.358987 ignition[1277]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 13 20:38:40.370551 ignition[1277]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:38:40.375824 ignition[1277]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:38:40.375824 ignition[1277]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 13 20:38:40.375824 ignition[1277]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:38:40.375824 ignition[1277]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:38:40.385709 ignition[1277]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:38:40.385709 ignition[1277]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:38:40.385709 ignition[1277]: INFO : files: files passed Jan 13 20:38:40.385709 ignition[1277]: INFO : Ignition finished successfully Jan 13 20:38:40.391274 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:38:40.401503 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:38:40.406402 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:38:40.410508 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:38:40.410827 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 13 20:38:40.437581 initrd-setup-root-after-ignition[1310]: grep: Jan 13 20:38:40.438982 initrd-setup-root-after-ignition[1306]: grep: Jan 13 20:38:40.442625 initrd-setup-root-after-ignition[1310]: /sysroot/etc/flatcar/enabled-sysext.conf Jan 13 20:38:40.441567 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:38:40.450826 initrd-setup-root-after-ignition[1306]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:38:40.450826 initrd-setup-root-after-ignition[1310]: : No such file or directory Jan 13 20:38:40.445474 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:38:40.468127 initrd-setup-root-after-ignition[1306]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:38:40.471389 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:38:40.511442 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:38:40.512723 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:38:40.517474 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:38:40.522234 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:38:40.522443 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:38:40.534454 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:38:40.556696 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:38:40.577759 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:38:40.610479 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:38:40.610834 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:38:40.620215 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:38:40.621533 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:38:40.621772 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:38:40.628069 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:38:40.629361 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:38:40.631850 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:38:40.636560 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:38:40.642198 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:38:40.645752 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:38:40.648725 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:38:40.651373 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:38:40.654047 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:38:40.656910 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:38:40.659056 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:38:40.659183 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:38:40.665692 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:38:40.667348 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 13 20:38:40.669823 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:38:40.671208 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:38:40.674069 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:38:40.674196 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:38:40.683889 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:38:40.684094 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:38:40.697270 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:38:40.697501 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:38:40.728713 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:38:40.734820 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:38:40.738484 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:38:40.738876 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:38:40.743653 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:38:40.744695 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:38:40.756214 ignition[1330]: INFO : Ignition 2.20.0 Jan 13 20:38:40.756214 ignition[1330]: INFO : Stage: umount Jan 13 20:38:40.756214 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:38:40.756214 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:38:40.756214 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:38:40.766881 ignition[1330]: INFO : PUT result: OK Jan 13 20:38:40.760554 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:38:40.760681 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:38:40.772102 ignition[1330]: INFO : umount: umount passed Jan 13 20:38:40.772102 ignition[1330]: INFO : Ignition finished successfully Jan 13 20:38:40.774440 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:38:40.774772 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:38:40.778161 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:38:40.778315 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:38:40.781950 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:38:40.782019 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:38:40.786554 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:38:40.786756 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:38:40.795295 systemd[1]: Stopped target network.target - Network. Jan 13 20:38:40.796524 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:38:40.796598 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:38:40.797861 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:38:40.799368 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:38:40.804775 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:38:40.807692 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 13 20:38:40.810587 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:38:40.812677 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:38:40.814417 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:38:40.815790 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:38:40.815839 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:38:40.817238 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:38:40.817306 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:38:40.818963 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:38:40.819017 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:38:40.821545 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:38:40.827602 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:38:40.831315 systemd-networkd[1085]: eth0: DHCPv6 lease lost Jan 13 20:38:40.841067 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:38:40.841734 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:38:40.841901 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:38:40.843742 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:38:40.843931 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:38:40.852996 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:38:40.853054 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:38:40.867832 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:38:40.875578 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:38:40.875674 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:38:40.883627 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:38:40.883694 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:38:40.885238 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:38:40.885326 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:38:40.897485 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:38:40.897659 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:38:40.901766 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:38:40.943097 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:38:40.950212 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:38:40.959669 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:38:40.959862 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:38:40.971814 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:38:40.971897 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:38:40.975196 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:38:40.975281 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:38:40.978164 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:38:40.978330 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 13 20:38:40.984039 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:38:40.985860 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:38:40.990547 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:38:40.991004 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:38:41.002788 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:38:41.002878 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:38:41.015476 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:38:41.017097 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:38:41.017233 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:38:41.017545 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 20:38:41.017600 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:38:41.021850 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:38:41.021923 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:38:41.024770 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:38:41.024838 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:38:41.031719 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:38:41.031845 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:38:41.043225 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:38:41.043370 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:38:41.059507 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:38:41.068253 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:38:41.107806 systemd[1]: Switching root. Jan 13 20:38:41.141852 systemd-journald[179]: Journal stopped Jan 13 20:38:44.110492 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). Jan 13 20:38:44.110588 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:38:44.110620 kernel: SELinux: policy capability open_perms=1 Jan 13 20:38:44.110644 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:38:44.110667 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:38:44.110687 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:38:44.110708 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:38:44.110737 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:38:44.110772 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:38:44.110793 kernel: audit: type=1403 audit(1736800722.439:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:38:44.110825 systemd[1]: Successfully loaded SELinux policy in 78.666ms. Jan 13 20:38:44.110861 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.534ms. 
Jan 13 20:38:44.110886 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:38:44.110910 systemd[1]: Detected virtualization amazon. Jan 13 20:38:44.110933 systemd[1]: Detected architecture x86-64. Jan 13 20:38:44.110955 systemd[1]: Detected first boot. Jan 13 20:38:44.110984 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:38:44.111009 zram_generator::config[1372]: No configuration found. Jan 13 20:38:44.111036 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:38:44.111060 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:38:44.111085 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:38:44.111109 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:38:44.111141 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:38:44.111166 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:38:44.111194 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:38:44.111217 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:38:44.112279 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:38:44.112317 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:38:44.112337 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:38:44.112357 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:38:44.112394 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:38:44.112413 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:38:44.112432 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:38:44.112456 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:38:44.112473 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:38:44.112492 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:38:44.112511 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:38:44.112530 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:38:44.112549 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:38:44.112567 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:38:44.112587 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:38:44.112612 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:38:44.112634 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:38:44.112656 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:38:44.112678 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:38:44.112700 systemd[1]: Reached target swap.target - Swaps. 
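
"Detected first boot" / "Initializing machine ID from VM UUID" above: on a VM, systemd can seed /etc/machine-id from the hypervisor-provided DMI product UUID instead of generating a random one. A sketch that reads the same two values for comparison; the sysfs path is the standard DMI location (reading product_uuid normally requires root):

    from pathlib import Path

    def read_first(path):
        try:
            return Path(path).read_text().strip()
        except OSError:
            return None

    # Standard DMI export of the VM UUID (root-only on most systems).
    print("DMI product UUID:", read_first("/sys/class/dmi/id/product_uuid"))
    # The machine ID systemd committed: 32 lowercase hex digits (a UUID sans dashes).
    print("machine-id      :", read_first("/etc/machine-id"))
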
Jan 13 20:38:44.112722 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:38:44.112744 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:38:44.112765 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:38:44.112791 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:38:44.112812 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:38:44.112833 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:38:44.112855 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:38:44.112880 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:38:44.112900 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:38:44.112921 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:38:44.112942 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:38:44.112962 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:38:44.112988 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:38:44.113010 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:38:44.113029 systemd[1]: Reached target machines.target - Containers. Jan 13 20:38:44.113049 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:38:44.113069 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:38:44.113089 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:38:44.113109 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:38:44.113129 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:38:44.113149 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:38:44.113173 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:38:44.113198 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:38:44.113218 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:38:44.113255 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:38:44.113325 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:38:44.113419 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:38:44.113441 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:38:44.113462 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:38:44.113486 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:38:44.113505 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:38:44.113885 kernel: loop: module loaded Jan 13 20:38:44.113915 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 13 20:38:44.113936 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:38:44.113957 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:38:44.113977 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:38:44.113997 systemd[1]: Stopped verity-setup.service. Jan 13 20:38:44.114020 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:38:44.114576 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:38:44.114612 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:38:44.114632 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:38:44.114652 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:38:44.114673 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:38:44.114698 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:38:44.114719 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:38:44.114740 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:38:44.114759 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:38:44.114779 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:38:44.114800 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:38:44.114820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:38:44.114840 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:38:44.114865 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:38:44.114886 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:38:44.114911 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:38:44.114932 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:38:44.114953 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:38:44.114974 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:38:44.114999 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:38:44.115020 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:38:44.115041 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:38:44.115063 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:38:44.115085 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:38:44.115106 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:38:44.115126 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:38:44.115148 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:38:44.115171 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 13 20:38:44.115193 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:38:44.115214 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:38:44.115235 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:38:44.115288 kernel: ACPI: bus type drm_connector registered Jan 13 20:38:44.115312 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:38:44.115334 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:38:44.115356 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:38:44.115382 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:38:44.115405 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:38:44.115426 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:38:44.116339 systemd-journald[1454]: Collecting audit messages is disabled. Jan 13 20:38:44.116388 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:38:44.116408 kernel: fuse: init (API version 7.39) Jan 13 20:38:44.116428 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:38:44.116448 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:38:44.116467 kernel: loop0: detected capacity change from 0 to 210664 Jan 13 20:38:44.116487 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:38:44.116507 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:38:44.116525 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:38:44.116545 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:38:44.116570 systemd-journald[1454]: Journal started Jan 13 20:38:44.116607 systemd-journald[1454]: Runtime Journal (/run/log/journal/ec22f2f8efa8428c4e29107669108a3f) is 4.8M, max 38.5M, 33.7M free. Jan 13 20:38:43.341456 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:38:43.372136 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 13 20:38:44.118775 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:38:43.372547 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:38:44.133525 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:38:44.147563 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:38:44.186763 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:38:44.196196 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:38:44.201205 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:38:44.211407 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:38:44.265132 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:38:44.284749 kernel: loop1: detected capacity change from 0 to 62848 Jan 13 20:38:44.281743 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:38:44.290731 systemd-tmpfiles[1480]: ACLs are not supported, ignoring. 
Jan 13 20:38:44.290759 systemd-tmpfiles[1480]: ACLs are not supported, ignoring. Jan 13 20:38:44.305592 udevadm[1513]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 20:38:44.318044 systemd-journald[1454]: Time spent on flushing to /var/log/journal/ec22f2f8efa8428c4e29107669108a3f is 69.177ms for 981 entries. Jan 13 20:38:44.318044 systemd-journald[1454]: System Journal (/var/log/journal/ec22f2f8efa8428c4e29107669108a3f) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:38:44.396799 systemd-journald[1454]: Received client request to flush runtime journal. Jan 13 20:38:44.396862 kernel: loop2: detected capacity change from 0 to 141000 Jan 13 20:38:44.323883 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:38:44.337846 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:38:44.399488 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:38:44.451018 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:38:44.461480 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:38:44.510956 systemd-tmpfiles[1522]: ACLs are not supported, ignoring. Jan 13 20:38:44.511205 systemd-tmpfiles[1522]: ACLs are not supported, ignoring. Jan 13 20:38:44.524774 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:38:44.541277 kernel: loop3: detected capacity change from 0 to 138184 Jan 13 20:38:44.673587 kernel: loop4: detected capacity change from 0 to 210664 Jan 13 20:38:44.691272 kernel: loop5: detected capacity change from 0 to 62848 Jan 13 20:38:44.707499 kernel: loop6: detected capacity change from 0 to 141000 Jan 13 20:38:44.740284 kernel: loop7: detected capacity change from 0 to 138184 Jan 13 20:38:44.776413 (sd-merge)[1527]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 13 20:38:44.778288 (sd-merge)[1527]: Merged extensions into '/usr'. Jan 13 20:38:44.786290 systemd[1]: Reloading requested from client PID 1478 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:38:44.786465 systemd[1]: Reloading... Jan 13 20:38:45.001276 zram_generator::config[1550]: No configuration found. Jan 13 20:38:45.343546 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:38:45.503171 systemd[1]: Reloading finished in 715 ms. Jan 13 20:38:45.536356 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:38:45.554421 systemd[1]: Starting ensure-sysext.service... Jan 13 20:38:45.570679 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:38:45.607016 systemd[1]: Reloading requested from client PID 1601 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:38:45.607293 systemd[1]: Reloading... Jan 13 20:38:45.668543 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:38:45.670487 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
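
The sd-merge lines above are systemd-sysext at work: extension images such as the kubernetes.raw symlinked into /etc/extensions by the files stage earlier are collected and overlaid read-only onto /usr, after which systemd reloads to pick up the newly visible units. A sketch that merely lists what would be merged; the directories named are standard sysext search paths (systemd-sysext(8) lists a few more):

    from pathlib import Path

    SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in SYSEXT_DIRS:
        base = Path(d)
        if not base.is_dir():
            continue
        for entry in sorted(base.glob("*")):
            # e.g. /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/...
            target = entry.resolve() if entry.is_symlink() else entry
            print(f"{entry} -> {target}")
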
Jan 13 20:38:45.677729 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:38:45.698421 systemd-tmpfiles[1602]: ACLs are not supported, ignoring. Jan 13 20:38:45.698764 systemd-tmpfiles[1602]: ACLs are not supported, ignoring. Jan 13 20:38:45.724015 systemd-tmpfiles[1602]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:38:45.725467 systemd-tmpfiles[1602]: Skipping /boot Jan 13 20:38:45.746754 systemd-tmpfiles[1602]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:38:45.746918 systemd-tmpfiles[1602]: Skipping /boot Jan 13 20:38:45.796266 zram_generator::config[1630]: No configuration found. Jan 13 20:38:46.051856 ldconfig[1472]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:38:46.082463 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:38:46.162357 systemd[1]: Reloading finished in 553 ms. Jan 13 20:38:46.180450 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:38:46.182295 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:38:46.191439 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:38:46.215756 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:38:46.230729 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:38:46.245086 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:38:46.253489 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:38:46.270482 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:38:46.284292 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:38:46.306014 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:38:46.306501 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:38:46.314795 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:38:46.321364 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:38:46.331708 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:38:46.333195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:38:46.340640 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:38:46.342350 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:38:46.349415 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:38:46.349735 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 13 20:38:46.349977 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:38:46.350186 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:38:46.377836 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:38:46.384853 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:38:46.386413 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:38:46.395767 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:38:46.397851 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:38:46.398184 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:38:46.424722 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:38:46.427149 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:38:46.430944 systemd[1]: Finished ensure-sysext.service. Jan 13 20:38:46.436124 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:38:46.436443 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:38:46.439469 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:38:46.439735 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:38:46.441040 systemd-udevd[1692]: Using default interface naming scheme 'v255'. Jan 13 20:38:46.463690 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:38:46.469834 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:38:46.473327 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:38:46.488463 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:38:46.489899 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:38:46.492386 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:38:46.492578 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:38:46.496543 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:38:46.509628 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:38:46.511827 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:38:46.522870 augenrules[1723]: No rules Jan 13 20:38:46.523511 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:38:46.523746 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:38:46.533619 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:38:46.562584 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 13 20:38:46.576642 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:38:46.787577 systemd-resolved[1687]: Positive Trust Anchors: Jan 13 20:38:46.787611 systemd-resolved[1687]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:38:46.787677 systemd-resolved[1687]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:38:46.800852 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 20:38:46.805663 systemd-resolved[1687]: Defaulting to hostname 'linux'. Jan 13 20:38:46.809074 (udev-worker)[1744]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:38:46.810021 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:38:46.812433 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:38:46.818717 systemd-networkd[1739]: lo: Link UP Jan 13 20:38:46.818726 systemd-networkd[1739]: lo: Gained carrier Jan 13 20:38:46.820614 systemd-networkd[1739]: Enumeration completed Jan 13 20:38:46.820741 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:38:46.822677 systemd[1]: Reached target network.target - Network. Jan 13 20:38:46.831447 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:38:46.889435 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jan 13 20:38:46.910603 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 13 20:38:46.927385 kernel: ACPI: button: Power Button [PWRF] Jan 13 20:38:46.928055 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 13 20:38:46.933270 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jan 13 20:38:46.940808 systemd-networkd[1739]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:38:46.940822 systemd-networkd[1739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:38:46.944007 systemd-networkd[1739]: eth0: Link UP Jan 13 20:38:46.944206 systemd-networkd[1739]: eth0: Gained carrier Jan 13 20:38:46.944234 systemd-networkd[1739]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:38:46.950138 kernel: ACPI: button: Sleep Button [SLPF] Jan 13 20:38:46.955429 systemd-networkd[1739]: eth0: DHCPv4 address 172.31.21.52/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 20:38:46.983126 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:38:46.993189 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:38:46.999290 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (1744) Jan 13 20:38:47.151279 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
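
A quick sanity check of the DHCPv4 lease logged above: 172.31.21.52/20 places eth0 in the 172.31.16.0/20 network, which indeed contains the gateway and DHCP server 172.31.16.1:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.21.52/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                # 172.31.16.0/20
    print(gateway in iface.network)     # True
    print(iface.network.num_addresses)  # 4096
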
Jan 13 20:38:47.251714 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:38:47.258462 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:38:47.267035 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:38:47.272055 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:38:47.287119 lvm[1851]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:38:47.297809 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:38:47.321485 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:38:47.323369 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:38:47.324887 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:38:47.326542 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:38:47.328299 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:38:47.330197 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:38:47.331856 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:38:47.333703 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:38:47.335722 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:38:47.335809 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:38:47.337214 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:38:47.342146 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:38:47.351571 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:38:47.365447 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:38:47.369090 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:38:47.372981 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:38:47.375309 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:38:47.376930 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:38:47.378536 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:38:47.378581 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:38:47.396437 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:38:47.403523 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:38:47.409089 lvm[1859]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:38:47.411823 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:38:47.417420 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:38:47.431732 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 13 20:38:47.434892 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:38:47.451236 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:38:47.469525 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 20:38:47.475351 jq[1863]: false Jan 13 20:38:47.476654 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:38:47.485496 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 20:38:47.489754 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:38:47.504403 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:38:47.536481 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:38:47.539931 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:38:47.542310 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:38:47.556926 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:38:47.563351 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:38:47.566615 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:38:47.582982 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:38:47.583420 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:38:47.623839 extend-filesystems[1864]: Found loop4 Jan 13 20:38:47.623839 extend-filesystems[1864]: Found loop5 Jan 13 20:38:47.623839 extend-filesystems[1864]: Found loop6 Jan 13 20:38:47.623839 extend-filesystems[1864]: Found loop7 Jan 13 20:38:47.623839 extend-filesystems[1864]: Found nvme0n1 Jan 13 20:38:47.623839 extend-filesystems[1864]: Found nvme0n1p9 Jan 13 20:38:47.623839 extend-filesystems[1864]: Checking size of /dev/nvme0n1p9 Jan 13 20:38:47.644627 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:38:47.655329 jq[1876]: true Jan 13 20:38:47.644881 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:38:47.681686 ntpd[1866]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:25:52 UTC 2025 (1): Starting Jan 13 20:38:47.683764 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:25:52 UTC 2025 (1): Starting Jan 13 20:38:47.683764 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:38:47.683764 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: ---------------------------------------------------- Jan 13 20:38:47.683764 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:38:47.683764 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:38:47.683764 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: corporation. 
Support and training for ntp-4 are Jan 13 20:38:47.683764 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: available at https://www.nwtime.org/support Jan 13 20:38:47.683764 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: ---------------------------------------------------- Jan 13 20:38:47.681725 ntpd[1866]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:38:47.681736 ntpd[1866]: ---------------------------------------------------- Jan 13 20:38:47.681746 ntpd[1866]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:38:47.681756 ntpd[1866]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:38:47.681766 ntpd[1866]: corporation. Support and training for ntp-4 are Jan 13 20:38:47.681776 ntpd[1866]: available at https://www.nwtime.org/support Jan 13 20:38:47.681786 ntpd[1866]: ---------------------------------------------------- Jan 13 20:38:47.693217 ntpd[1866]: proto: precision = 0.098 usec (-23) Jan 13 20:38:47.701698 (ntainerd)[1892]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:38:47.728977 jq[1895]: true Jan 13 20:38:47.729107 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: proto: precision = 0.098 usec (-23) Jan 13 20:38:47.735124 ntpd[1866]: basedate set to 2025-01-01 Jan 13 20:38:47.742103 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: basedate set to 2025-01-01 Jan 13 20:38:47.742103 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: gps base set to 2025-01-05 (week 2348) Jan 13 20:38:47.735155 ntpd[1866]: gps base set to 2025-01-05 (week 2348) Jan 13 20:38:47.745198 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:38:47.747078 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:38:47.752002 tar[1883]: linux-amd64/helm Jan 13 20:38:47.754202 dbus-daemon[1862]: [system] SELinux support is enabled Jan 13 20:38:47.755048 ntpd[1866]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:38:47.765395 update_engine[1874]: I20250113 20:38:47.757825 1874 main.cc:92] Flatcar Update Engine starting Jan 13 20:38:47.765705 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:38:47.765705 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:38:47.756496 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:38:47.755109 ntpd[1866]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:38:47.760479 ntpd[1866]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:38:47.766947 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
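
The ntpd line "proto: precision = 0.098 usec (-23)" above pairs a measured clock-reading granularity (~0.098 µs) with the power-of-two exponent ntpd stores it as. A worked check that -23 is the matching exponent (2^-23 s ≈ 0.119 µs); ntpd's exact rounding rule is not shown here, this only confirms the magnitudes agree:

    import math

    measured_s = 0.098e-6
    exponent = round(math.log2(measured_s))
    print(exponent)                        # -23
    print(2.0 ** exponent * 1e6, "usec")   # ~0.119 usec
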
Jan 13 20:38:47.774355 ntpd[1866]: Listen normally on 3 eth0 172.31.21.52:123 Jan 13 20:38:47.776533 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:38:47.776533 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: Listen normally on 3 eth0 172.31.21.52:123 Jan 13 20:38:47.776533 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: Listen normally on 4 lo [::1]:123 Jan 13 20:38:47.776533 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: bind(21) AF_INET6 fe80::4b5:50ff:fe91:f405%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:38:47.776533 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: unable to create socket on eth0 (5) for fe80::4b5:50ff:fe91:f405%2#123 Jan 13 20:38:47.776533 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: failed to init interface for address fe80::4b5:50ff:fe91:f405%2 Jan 13 20:38:47.776533 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: Listening on routing socket on fd #21 for interface updates Jan 13 20:38:47.766987 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:38:47.774411 ntpd[1866]: Listen normally on 4 lo [::1]:123 Jan 13 20:38:47.768744 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:38:47.774473 ntpd[1866]: bind(21) AF_INET6 fe80::4b5:50ff:fe91:f405%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:38:47.768774 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:38:47.774499 ntpd[1866]: unable to create socket on eth0 (5) for fe80::4b5:50ff:fe91:f405%2#123 Jan 13 20:38:47.774514 ntpd[1866]: failed to init interface for address fe80::4b5:50ff:fe91:f405%2 Jan 13 20:38:47.774554 ntpd[1866]: Listening on routing socket on fd #21 for interface updates Jan 13 20:38:47.784195 ntpd[1866]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:38:47.790144 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:38:47.790144 ntpd[1866]: 13 Jan 20:38:47 ntpd[1866]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:38:47.786756 ntpd[1866]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:38:47.790339 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 20:38:47.798071 extend-filesystems[1864]: Resized partition /dev/nvme0n1p9 Jan 13 20:38:47.810091 dbus-daemon[1862]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1739 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 20:38:47.810796 extend-filesystems[1916]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:38:47.819959 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 13 20:38:47.829565 update_engine[1874]: I20250113 20:38:47.828020 1874 update_check_scheduler.cc:74] Next update check in 9m43s Jan 13 20:38:47.829447 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 20:38:47.831448 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:38:47.855801 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
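The "bind(21) AF_INET6 fe80::4b5:50ff:fe91:f405%2#123 ... Cannot assign requested address" failure above is transient: eth0's IPv6 link-local address has not finished duplicate address detection yet, and ntpd binds it successfully later in the log ("Listen normally on 6 eth0 [fe80::...]", right after networkd reports "Gained IPv6LL"). A hedged sketch of the retry pattern a daemon can use, with the address and interface taken from this log (binding port 123 needs root):

    import errno, socket, time

    def bind_when_ready(host: str, port: int, attempts: int = 10) -> socket.socket:
        """Retry binding an IPv6 link-local address until DAD finishes."""
        for _ in range(attempts):
            family, kind, _, _, sockaddr = socket.getaddrinfo(
                host, port, socket.AF_INET6, socket.SOCK_DGRAM)[0]
            sock = socket.socket(family, kind)
            try:
                sock.bind(sockaddr)  # EADDRNOTAVAIL while the address is tentative
                return sock
            except OSError as err:
                sock.close()
                if err.errno != errno.EADDRNOTAVAIL:
                    raise
                time.sleep(1.0)
        raise TimeoutError(f"{host} never became bindable")

    # e.g. bind_when_ready("fe80::4b5:50ff:fe91:f405%eth0", 123)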
Jan 13 20:38:47.936292 coreos-metadata[1861]: Jan 13 20:38:47.929 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:38:47.957360 coreos-metadata[1861]: Jan 13 20:38:47.947 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 20:38:47.957360 coreos-metadata[1861]: Jan 13 20:38:47.948 INFO Fetch successful Jan 13 20:38:47.957360 coreos-metadata[1861]: Jan 13 20:38:47.948 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 20:38:47.957360 coreos-metadata[1861]: Jan 13 20:38:47.950 INFO Fetch successful Jan 13 20:38:47.957360 coreos-metadata[1861]: Jan 13 20:38:47.951 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 20:38:47.958493 coreos-metadata[1861]: Jan 13 20:38:47.958 INFO Fetch successful Jan 13 20:38:47.958493 coreos-metadata[1861]: Jan 13 20:38:47.958 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 20:38:47.961223 coreos-metadata[1861]: Jan 13 20:38:47.960 INFO Fetch successful Jan 13 20:38:47.961223 coreos-metadata[1861]: Jan 13 20:38:47.960 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 20:38:47.962533 coreos-metadata[1861]: Jan 13 20:38:47.962 INFO Fetch failed with 404: resource not found Jan 13 20:38:47.963676 coreos-metadata[1861]: Jan 13 20:38:47.963 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 20:38:47.964624 coreos-metadata[1861]: Jan 13 20:38:47.964 INFO Fetch successful Jan 13 20:38:47.964624 coreos-metadata[1861]: Jan 13 20:38:47.964 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 20:38:47.965478 coreos-metadata[1861]: Jan 13 20:38:47.965 INFO Fetch successful Jan 13 20:38:47.965478 coreos-metadata[1861]: Jan 13 20:38:47.965 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 20:38:47.966320 coreos-metadata[1861]: Jan 13 20:38:47.965 INFO Fetch successful Jan 13 20:38:47.966320 coreos-metadata[1861]: Jan 13 20:38:47.966 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 20:38:47.969350 coreos-metadata[1861]: Jan 13 20:38:47.969 INFO Fetch successful Jan 13 20:38:47.969350 coreos-metadata[1861]: Jan 13 20:38:47.969 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 20:38:47.980764 coreos-metadata[1861]: Jan 13 20:38:47.975 INFO Fetch successful Jan 13 20:38:47.986267 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 20:38:48.089449 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (1744) Jan 13 20:38:48.085997 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:38:48.089690 extend-filesystems[1916]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 20:38:48.089690 extend-filesystems[1916]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:38:48.089690 extend-filesystems[1916]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 13 20:38:48.086329 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
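The coreos-metadata fetches above follow the IMDSv2 flow: PUT a session token, then GET metadata paths with that token, which is why the first line reads "Putting http://169.254.169.254/latest/api/token". A minimal stdlib sketch of the same two-step exchange, using the 2021-01-03 API version from the URLs in the log (it only works from inside an EC2 instance):

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_get(path: str) -> str:
        """Fetch one EC2 metadata value via IMDSv2: token PUT, then GET."""
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token", method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
        token = urllib.request.urlopen(req, timeout=2).read().decode()
        req = urllib.request.Request(
            f"{IMDS}/2021-01-03/meta-data/{path}",
            headers={"X-aws-ec2-metadata-token": token})
        return urllib.request.urlopen(req, timeout=2).read().decode()

    # print(imds_get("instance-id"))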
Jan 13 20:38:48.106629 extend-filesystems[1864]: Resized filesystem in /dev/nvme0n1p9 Jan 13 20:38:48.106629 extend-filesystems[1864]: Found nvme0n1p1 Jan 13 20:38:48.106629 extend-filesystems[1864]: Found nvme0n1p2 Jan 13 20:38:48.106629 extend-filesystems[1864]: Found nvme0n1p3 Jan 13 20:38:48.106629 extend-filesystems[1864]: Found usr Jan 13 20:38:48.106629 extend-filesystems[1864]: Found nvme0n1p4 Jan 13 20:38:48.106629 extend-filesystems[1864]: Found nvme0n1p6 Jan 13 20:38:48.106629 extend-filesystems[1864]: Found nvme0n1p7 Jan 13 20:38:48.137658 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:38:48.150233 bash[1940]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:38:48.167462 systemd[1]: Starting sshkeys.service... Jan 13 20:38:48.196547 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:38:48.199703 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:38:48.223619 systemd-logind[1872]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 20:38:48.224100 systemd-logind[1872]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 13 20:38:48.224314 systemd-logind[1872]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 20:38:48.226432 systemd-logind[1872]: New seat seat0. Jan 13 20:38:48.232931 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:38:48.326418 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:38:48.339465 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:38:48.355431 systemd-networkd[1739]: eth0: Gained IPv6LL Jan 13 20:38:48.360589 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:38:48.363804 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:38:48.373256 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 13 20:38:48.378582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:38:48.386089 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:38:48.550674 coreos-metadata[1974]: Jan 13 20:38:48.550 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:38:48.550674 coreos-metadata[1974]: Jan 13 20:38:48.550 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 20:38:48.550674 coreos-metadata[1974]: Jan 13 20:38:48.550 INFO Fetch successful Jan 13 20:38:48.550674 coreos-metadata[1974]: Jan 13 20:38:48.550 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 20:38:48.550674 coreos-metadata[1974]: Jan 13 20:38:48.550 INFO Fetch successful Jan 13 20:38:48.553391 unknown[1974]: wrote ssh authorized keys file for user: core Jan 13 20:38:48.594971 dbus-daemon[1862]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 20:38:48.595845 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 20:38:48.599759 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
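extend-filesystems has just grown the root ext4 filesystem on /dev/nvme0n1p9 online, from 553472 to 1489915 4k blocks, by running resize2fs against the mounted device. A rough sketch of the check-then-grow logic, not the actual Flatcar unit (the 1 MiB slack is an assumption to avoid pointless no-op runs):

    import fcntl, os, struct, subprocess

    BLKGETSIZE64 = 0x80081272  # Linux ioctl: block device size in bytes

    def device_bytes(dev: str) -> int:
        with open(dev, "rb") as f:
            return struct.unpack("Q", fcntl.ioctl(f.fileno(), BLKGETSIZE64, b"\0" * 8))[0]

    def filesystem_bytes(mountpoint: str) -> int:
        st = os.statvfs(mountpoint)
        return st.f_frsize * st.f_blocks

    def grow_if_needed(dev: str, mountpoint: str) -> None:
        # ext4 supports online growth; resize2fs with no size argument
        # expands the filesystem to fill the (already grown) partition.
        if device_bytes(dev) - filesystem_bytes(mountpoint) > 1 << 20:
            subprocess.run(["resize2fs", dev], check=True)

    # grow_if_needed("/dev/nvme0n1p9", "/")  # needs root, as during this boot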
Jan 13 20:38:48.610659 dbus-daemon[1862]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1917 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 20:38:48.622658 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 20:38:48.669536 update-ssh-keys[2043]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:38:48.680755 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:38:48.692420 systemd[1]: Finished sshkeys.service. Jan 13 20:38:48.699475 polkitd[2059]: Started polkitd version 121 Jan 13 20:38:48.721971 amazon-ssm-agent[1993]: Initializing new seelog logger Jan 13 20:38:48.724546 amazon-ssm-agent[1993]: New Seelog Logger Creation Complete Jan 13 20:38:48.728451 amazon-ssm-agent[1993]: 2025/01/13 20:38:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:38:48.728451 amazon-ssm-agent[1993]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:38:48.728451 amazon-ssm-agent[1993]: 2025/01/13 20:38:48 processing appconfig overrides Jan 13 20:38:48.731499 amazon-ssm-agent[1993]: 2025/01/13 20:38:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:38:48.731499 amazon-ssm-agent[1993]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:38:48.731499 amazon-ssm-agent[1993]: 2025/01/13 20:38:48 processing appconfig overrides Jan 13 20:38:48.731678 amazon-ssm-agent[1993]: 2025/01/13 20:38:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:38:48.731678 amazon-ssm-agent[1993]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:38:48.732268 amazon-ssm-agent[1993]: 2025-01-13 20:38:48 INFO Proxy environment variables: Jan 13 20:38:48.732268 amazon-ssm-agent[1993]: 2025/01/13 20:38:48 processing appconfig overrides Jan 13 20:38:48.742271 amazon-ssm-agent[1993]: 2025/01/13 20:38:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:38:48.742271 amazon-ssm-agent[1993]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:38:48.742271 amazon-ssm-agent[1993]: 2025/01/13 20:38:48 processing appconfig overrides Jan 13 20:38:48.781763 polkitd[2059]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 20:38:48.781853 polkitd[2059]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 20:38:48.788401 locksmithd[1918]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:38:48.795839 polkitd[2059]: Finished loading, compiling and executing 2 rules Jan 13 20:38:48.798312 dbus-daemon[1862]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 20:38:48.798522 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 20:38:48.800169 polkitd[2059]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 20:38:48.841941 amazon-ssm-agent[1993]: 2025-01-13 20:38:48 INFO https_proxy: Jan 13 20:38:48.893174 systemd-hostnamed[1917]: Hostname set to (transient) Jan 13 20:38:48.893315 systemd-resolved[1687]: System hostname changed to 'ip-172-31-21-52'. 
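update-ssh-keys and coreos-metadata-sshkeys@core above rewrote /home/core/.ssh/authorized_keys from the keys fetched off the metadata service. A hedged sketch of the usual safe pattern for that kind of rewrite, an atomic replace with owner-only permissions, not the actual Flatcar implementation:

    import os, pwd, tempfile

    def write_authorized_keys(user: str, keys: list[str]) -> None:
        """Atomically replace ~user/.ssh/authorized_keys (0600, owned by the user)."""
        ent = pwd.getpwnam(user)
        ssh_dir = os.path.join(ent.pw_dir, ".ssh")
        os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
        fd, tmp = tempfile.mkstemp(dir=ssh_dir)
        try:
            os.write(fd, ("\n".join(keys) + "\n").encode())
            os.fchmod(fd, 0o600)
            os.fchown(fd, ent.pw_uid, ent.pw_gid)
        finally:
            os.close(fd)
        os.replace(tmp, os.path.join(ssh_dir, "authorized_keys"))  # atomic on one fs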
Jan 13 20:38:48.944261 amazon-ssm-agent[1993]: 2025-01-13 20:38:48 INFO http_proxy: Jan 13 20:38:48.998827 containerd[1892]: time="2025-01-13T20:38:48.998720841Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:38:49.046750 amazon-ssm-agent[1993]: 2025-01-13 20:38:48 INFO no_proxy: Jan 13 20:38:49.145922 amazon-ssm-agent[1993]: 2025-01-13 20:38:48 INFO Checking if agent identity type OnPrem can be assumed Jan 13 20:38:49.154162 containerd[1892]: time="2025-01-13T20:38:49.154112176Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:38:49.161928 containerd[1892]: time="2025-01-13T20:38:49.161865322Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:38:49.162068 containerd[1892]: time="2025-01-13T20:38:49.162051838Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:38:49.162141 containerd[1892]: time="2025-01-13T20:38:49.162127343Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:38:49.163845 containerd[1892]: time="2025-01-13T20:38:49.163804091Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:38:49.167860 containerd[1892]: time="2025-01-13T20:38:49.164002203Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:38:49.167860 containerd[1892]: time="2025-01-13T20:38:49.164164108Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:38:49.167860 containerd[1892]: time="2025-01-13T20:38:49.164188295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:38:49.167860 containerd[1892]: time="2025-01-13T20:38:49.164455129Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:38:49.167860 containerd[1892]: time="2025-01-13T20:38:49.164476325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:38:49.167860 containerd[1892]: time="2025-01-13T20:38:49.164495705Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:38:49.167860 containerd[1892]: time="2025-01-13T20:38:49.164510536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:38:49.167860 containerd[1892]: time="2025-01-13T20:38:49.164600136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:38:49.171614 containerd[1892]: time="2025-01-13T20:38:49.170285199Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 13 20:38:49.171614 containerd[1892]: time="2025-01-13T20:38:49.170506233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:38:49.171614 containerd[1892]: time="2025-01-13T20:38:49.170528753Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:38:49.171614 containerd[1892]: time="2025-01-13T20:38:49.170637518Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:38:49.171614 containerd[1892]: time="2025-01-13T20:38:49.170690798Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:38:49.187280 containerd[1892]: time="2025-01-13T20:38:49.185311564Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:38:49.187280 containerd[1892]: time="2025-01-13T20:38:49.185487323Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:38:49.187280 containerd[1892]: time="2025-01-13T20:38:49.185564067Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:38:49.187280 containerd[1892]: time="2025-01-13T20:38:49.185591652Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:38:49.187280 containerd[1892]: time="2025-01-13T20:38:49.185629373Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:38:49.187280 containerd[1892]: time="2025-01-13T20:38:49.185917019Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:38:49.187280 containerd[1892]: time="2025-01-13T20:38:49.186541443Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:38:49.187280 containerd[1892]: time="2025-01-13T20:38:49.186683058Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:38:49.187280 containerd[1892]: time="2025-01-13T20:38:49.186705282Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:38:49.187280 containerd[1892]: time="2025-01-13T20:38:49.186729491Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:38:49.187280 containerd[1892]: time="2025-01-13T20:38:49.186749019Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:38:49.187280 containerd[1892]: time="2025-01-13T20:38:49.186771480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:38:49.187280 containerd[1892]: time="2025-01-13T20:38:49.186790277Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:38:49.187280 containerd[1892]: time="2025-01-13T20:38:49.186810138Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 13 20:38:49.190208 containerd[1892]: time="2025-01-13T20:38:49.186831750Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:38:49.190208 containerd[1892]: time="2025-01-13T20:38:49.186850998Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:38:49.190208 containerd[1892]: time="2025-01-13T20:38:49.186889146Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:38:49.190208 containerd[1892]: time="2025-01-13T20:38:49.186907831Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:38:49.190208 containerd[1892]: time="2025-01-13T20:38:49.186937364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:38:49.190208 containerd[1892]: time="2025-01-13T20:38:49.186957546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:38:49.190208 containerd[1892]: time="2025-01-13T20:38:49.186976969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:38:49.190208 containerd[1892]: time="2025-01-13T20:38:49.187003259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:38:49.190208 containerd[1892]: time="2025-01-13T20:38:49.187021961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:38:49.190208 containerd[1892]: time="2025-01-13T20:38:49.187052473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:38:49.190208 containerd[1892]: time="2025-01-13T20:38:49.187070245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:38:49.190208 containerd[1892]: time="2025-01-13T20:38:49.187089260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:38:49.190208 containerd[1892]: time="2025-01-13T20:38:49.187108253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:38:49.190208 containerd[1892]: time="2025-01-13T20:38:49.187128524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:38:49.190958 containerd[1892]: time="2025-01-13T20:38:49.187146149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:38:49.190958 containerd[1892]: time="2025-01-13T20:38:49.187163379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:38:49.190958 containerd[1892]: time="2025-01-13T20:38:49.187181472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:38:49.190958 containerd[1892]: time="2025-01-13T20:38:49.187203299Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:38:49.190958 containerd[1892]: time="2025-01-13T20:38:49.187233005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 13 20:38:49.196543 containerd[1892]: time="2025-01-13T20:38:49.191671859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:38:49.196543 containerd[1892]: time="2025-01-13T20:38:49.191709620Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:38:49.196543 containerd[1892]: time="2025-01-13T20:38:49.194691186Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:38:49.196543 containerd[1892]: time="2025-01-13T20:38:49.194930301Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:38:49.196543 containerd[1892]: time="2025-01-13T20:38:49.194954264Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:38:49.196543 containerd[1892]: time="2025-01-13T20:38:49.194973919Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:38:49.196543 containerd[1892]: time="2025-01-13T20:38:49.194989848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:38:49.196543 containerd[1892]: time="2025-01-13T20:38:49.195009802Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:38:49.196543 containerd[1892]: time="2025-01-13T20:38:49.195035371Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:38:49.196543 containerd[1892]: time="2025-01-13T20:38:49.195058134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 20:38:49.197112 containerd[1892]: time="2025-01-13T20:38:49.195537981Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:38:49.197112 containerd[1892]: time="2025-01-13T20:38:49.195608930Z" level=info msg="Connect containerd service" Jan 13 20:38:49.197112 containerd[1892]: time="2025-01-13T20:38:49.195655158Z" level=info msg="using legacy CRI server" Jan 13 20:38:49.197112 containerd[1892]: time="2025-01-13T20:38:49.195664944Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:38:49.197112 containerd[1892]: time="2025-01-13T20:38:49.195833908Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:38:49.208632 containerd[1892]: time="2025-01-13T20:38:49.201812502Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:38:49.208632 
containerd[1892]: time="2025-01-13T20:38:49.202273720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:38:49.208632 containerd[1892]: time="2025-01-13T20:38:49.202329797Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:38:49.208632 containerd[1892]: time="2025-01-13T20:38:49.202380805Z" level=info msg="Start subscribing containerd event" Jan 13 20:38:49.208632 containerd[1892]: time="2025-01-13T20:38:49.202428883Z" level=info msg="Start recovering state" Jan 13 20:38:49.208632 containerd[1892]: time="2025-01-13T20:38:49.202508032Z" level=info msg="Start event monitor" Jan 13 20:38:49.208632 containerd[1892]: time="2025-01-13T20:38:49.202521106Z" level=info msg="Start snapshots syncer" Jan 13 20:38:49.208632 containerd[1892]: time="2025-01-13T20:38:49.202533513Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:38:49.208632 containerd[1892]: time="2025-01-13T20:38:49.202543650Z" level=info msg="Start streaming server" Jan 13 20:38:49.208632 containerd[1892]: time="2025-01-13T20:38:49.202616814Z" level=info msg="containerd successfully booted in 0.208853s" Jan 13 20:38:49.202745 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:38:49.244871 amazon-ssm-agent[1993]: 2025-01-13 20:38:48 INFO Checking if agent identity type EC2 can be assumed Jan 13 20:38:49.349145 amazon-ssm-agent[1993]: 2025-01-13 20:38:48 INFO Agent will take identity from EC2 Jan 13 20:38:49.388617 sshd_keygen[1900]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:38:49.440659 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:38:49.456730 amazon-ssm-agent[1993]: 2025-01-13 20:38:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:38:49.452783 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:38:49.481178 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:38:49.481452 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:38:49.492928 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:38:49.529034 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:38:49.541882 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:38:49.551830 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:38:49.557820 amazon-ssm-agent[1993]: 2025-01-13 20:38:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:38:49.553426 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:38:49.651999 amazon-ssm-agent[1993]: 2025-01-13 20:38:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:38:49.751696 amazon-ssm-agent[1993]: 2025-01-13 20:38:48 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 20:38:49.769637 amazon-ssm-agent[1993]: 2025-01-13 20:38:48 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 13 20:38:49.769637 amazon-ssm-agent[1993]: 2025-01-13 20:38:48 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 20:38:49.769637 amazon-ssm-agent[1993]: 2025-01-13 20:38:48 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jan 13 20:38:49.769637 amazon-ssm-agent[1993]: 2025-01-13 20:38:48 INFO [Registrar] Starting registrar module Jan 13 20:38:49.769637 amazon-ssm-agent[1993]: 2025-01-13 20:38:48 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 20:38:49.769637 amazon-ssm-agent[1993]: 2025-01-13 20:38:49 INFO [EC2Identity] EC2 registration was successful. Jan 13 20:38:49.769637 amazon-ssm-agent[1993]: 2025-01-13 20:38:49 INFO [CredentialRefresher] credentialRefresher has started Jan 13 20:38:49.769637 amazon-ssm-agent[1993]: 2025-01-13 20:38:49 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 20:38:49.769637 amazon-ssm-agent[1993]: 2025-01-13 20:38:49 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 20:38:49.771877 tar[1883]: linux-amd64/LICENSE Jan 13 20:38:49.771877 tar[1883]: linux-amd64/README.md Jan 13 20:38:49.787579 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:38:49.850741 amazon-ssm-agent[1993]: 2025-01-13 20:38:49 INFO [CredentialRefresher] Next credential rotation will be in 31.62498780105 minutes Jan 13 20:38:49.927309 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:38:49.935800 systemd[1]: Started sshd@0-172.31.21.52:22-139.178.89.65:56432.service - OpenSSH per-connection server daemon (139.178.89.65:56432). Jan 13 20:38:50.207294 sshd[2105]: Accepted publickey for core from 139.178.89.65 port 56432 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:38:50.209632 sshd-session[2105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:50.212160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:38:50.214762 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:38:50.216119 systemd[1]: Startup finished in 991ms (kernel) + 9.613s (initrd) + 7.851s (userspace) = 18.455s. Jan 13 20:38:50.244744 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:38:50.253464 systemd-logind[1872]: New session 1 of user core. Jan 13 20:38:50.256985 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:38:50.341332 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:38:50.376893 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:38:50.378123 agetty[2100]: failed to open credentials directory Jan 13 20:38:50.382930 agetty[2099]: failed to open credentials directory Jan 13 20:38:50.389773 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:38:50.406577 (systemd)[2117]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:38:50.570836 systemd[2117]: Queued start job for default target default.target. Jan 13 20:38:50.577344 systemd[2117]: Created slice app.slice - User Application Slice. Jan 13 20:38:50.577387 systemd[2117]: Reached target paths.target - Paths. Jan 13 20:38:50.577409 systemd[2117]: Reached target timers.target - Timers. Jan 13 20:38:50.580445 systemd[2117]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:38:50.597737 systemd[2117]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:38:50.597977 systemd[2117]: Reached target sockets.target - Sockets. 
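The "Accepted publickey ... RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY" entries are sshd logging the key's OpenSSH-style fingerprint: the SHA256 of the raw key blob, base64 encoded with the padding stripped. That is easy to reproduce against an authorized_keys line:

    import base64, hashlib

    def openssh_fingerprint(authorized_keys_line: str) -> str:
        """Return the fingerprint in the format sshd logs: 'SHA256:<base64>'."""
        blob = base64.b64decode(authorized_keys_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # with open("/home/core/.ssh/authorized_keys") as f:
    #     print(openssh_fingerprint(f.readline()))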
Jan 13 20:38:50.598102 systemd[2117]: Reached target basic.target - Basic System. Jan 13 20:38:50.598434 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:38:50.600338 systemd[2117]: Reached target default.target - Main User Target. Jan 13 20:38:50.600414 systemd[2117]: Startup finished in 185ms. Jan 13 20:38:50.603517 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:38:50.682316 ntpd[1866]: Listen normally on 6 eth0 [fe80::4b5:50ff:fe91:f405%2]:123 Jan 13 20:38:50.682832 ntpd[1866]: 13 Jan 20:38:50 ntpd[1866]: Listen normally on 6 eth0 [fe80::4b5:50ff:fe91:f405%2]:123 Jan 13 20:38:50.763886 systemd[1]: Started sshd@1-172.31.21.52:22-139.178.89.65:40616.service - OpenSSH per-connection server daemon (139.178.89.65:40616). Jan 13 20:38:50.798623 amazon-ssm-agent[1993]: 2025-01-13 20:38:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 20:38:50.900055 amazon-ssm-agent[1993]: 2025-01-13 20:38:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2137) started Jan 13 20:38:50.954711 sshd[2134]: Accepted publickey for core from 139.178.89.65 port 40616 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:38:50.956946 sshd-session[2134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:50.963994 systemd-logind[1872]: New session 2 of user core. Jan 13 20:38:50.970580 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:38:51.000326 amazon-ssm-agent[1993]: 2025-01-13 20:38:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 20:38:51.098553 sshd[2147]: Connection closed by 139.178.89.65 port 40616 Jan 13 20:38:51.100108 sshd-session[2134]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:51.104592 systemd[1]: sshd@1-172.31.21.52:22-139.178.89.65:40616.service: Deactivated successfully. Jan 13 20:38:51.107940 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:38:51.111102 systemd-logind[1872]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:38:51.112773 systemd-logind[1872]: Removed session 2. Jan 13 20:38:51.117402 kubelet[2112]: E0113 20:38:51.117320 2112 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:38:51.126054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:38:51.126239 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:38:51.126792 systemd[1]: kubelet.service: Consumed 1.064s CPU time. Jan 13 20:38:51.135618 systemd[1]: Started sshd@2-172.31.21.52:22-139.178.89.65:40622.service - OpenSSH per-connection server daemon (139.178.89.65:40622). Jan 13 20:38:51.339917 sshd[2155]: Accepted publickey for core from 139.178.89.65 port 40622 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:38:51.341724 sshd-session[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:51.349230 systemd-logind[1872]: New session 3 of user core. Jan 13 20:38:51.359544 systemd[1]: Started session-3.scope - Session 3 of User core. 
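The kubelet failure above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is expected on a node that has not been bootstrapped yet: kubeadm writes that file during init or join, and systemd keeps restarting the unit until it appears, as the later "Scheduled restart job" entries show. For orientation only, a hypothetical minimal file of the right shape:

    import pathlib

    # Hypothetical minimal config; kubeadm generates the real one at init/join.
    MINIMAL = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    """

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(MINIMAL)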
Jan 13 20:38:51.474346 sshd[2157]: Connection closed by 139.178.89.65 port 40622 Jan 13 20:38:51.475035 sshd-session[2155]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:51.479379 systemd[1]: sshd@2-172.31.21.52:22-139.178.89.65:40622.service: Deactivated successfully. Jan 13 20:38:51.481505 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:38:51.484828 systemd-logind[1872]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:38:51.486325 systemd-logind[1872]: Removed session 3. Jan 13 20:38:51.511608 systemd[1]: Started sshd@3-172.31.21.52:22-139.178.89.65:40636.service - OpenSSH per-connection server daemon (139.178.89.65:40636). Jan 13 20:38:51.676946 sshd[2162]: Accepted publickey for core from 139.178.89.65 port 40636 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:38:51.678045 sshd-session[2162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:51.683077 systemd-logind[1872]: New session 4 of user core. Jan 13 20:38:51.688478 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:38:51.811856 sshd[2164]: Connection closed by 139.178.89.65 port 40636 Jan 13 20:38:51.812644 sshd-session[2162]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:51.816675 systemd[1]: sshd@3-172.31.21.52:22-139.178.89.65:40636.service: Deactivated successfully. Jan 13 20:38:51.819422 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:38:51.820262 systemd-logind[1872]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:38:51.821440 systemd-logind[1872]: Removed session 4. Jan 13 20:38:51.860666 systemd[1]: Started sshd@4-172.31.21.52:22-139.178.89.65:40642.service - OpenSSH per-connection server daemon (139.178.89.65:40642). Jan 13 20:38:52.021434 sshd[2169]: Accepted publickey for core from 139.178.89.65 port 40642 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:38:52.022962 sshd-session[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:52.027961 systemd-logind[1872]: New session 5 of user core. Jan 13 20:38:52.033446 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:38:52.202137 sudo[2172]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:38:52.202649 sudo[2172]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:38:52.216029 sudo[2172]: pam_unix(sudo:session): session closed for user root Jan 13 20:38:52.238349 sshd[2171]: Connection closed by 139.178.89.65 port 40642 Jan 13 20:38:52.239516 sshd-session[2169]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:52.245076 systemd[1]: sshd@4-172.31.21.52:22-139.178.89.65:40642.service: Deactivated successfully. Jan 13 20:38:52.247219 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:38:52.249223 systemd-logind[1872]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:38:52.250528 systemd-logind[1872]: Removed session 5. Jan 13 20:38:52.275649 systemd[1]: Started sshd@5-172.31.21.52:22-139.178.89.65:40656.service - OpenSSH per-connection server daemon (139.178.89.65:40656). 
Jan 13 20:38:52.441699 sshd[2178]: Accepted publickey for core from 139.178.89.65 port 40656 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:38:52.443470 sshd-session[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:52.449309 systemd-logind[1872]: New session 6 of user core. Jan 13 20:38:52.458562 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:38:52.564641 sudo[2182]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:38:52.565053 sudo[2182]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:38:52.580920 sudo[2182]: pam_unix(sudo:session): session closed for user root Jan 13 20:38:52.593085 sudo[2181]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:38:52.593515 sudo[2181]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:38:52.624791 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:38:52.670022 augenrules[2204]: No rules Jan 13 20:38:52.672047 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:38:52.672304 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:38:52.673831 sudo[2181]: pam_unix(sudo:session): session closed for user root Jan 13 20:38:52.696219 sshd[2180]: Connection closed by 139.178.89.65 port 40656 Jan 13 20:38:52.696849 sshd-session[2178]: pam_unix(sshd:session): session closed for user core Jan 13 20:38:52.701926 systemd[1]: sshd@5-172.31.21.52:22-139.178.89.65:40656.service: Deactivated successfully. Jan 13 20:38:52.704107 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:38:52.706387 systemd-logind[1872]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:38:52.707775 systemd-logind[1872]: Removed session 6. Jan 13 20:38:52.732271 systemd[1]: Started sshd@6-172.31.21.52:22-139.178.89.65:40672.service - OpenSSH per-connection server daemon (139.178.89.65:40672). Jan 13 20:38:52.930517 sshd[2212]: Accepted publickey for core from 139.178.89.65 port 40672 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:38:52.939471 sshd-session[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:38:52.959386 systemd-logind[1872]: New session 7 of user core. Jan 13 20:38:52.972091 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:38:53.084084 sudo[2215]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:38:53.084606 sudo[2215]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:38:53.932107 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:38:53.943906 (dockerd)[2233]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:38:54.637633 dockerd[2233]: time="2025-01-13T20:38:54.637565817Z" level=info msg="Starting up" Jan 13 20:38:55.129000 systemd-resolved[1687]: Clock change detected. Flushing caches. Jan 13 20:38:55.667070 systemd[1]: var-lib-docker-metacopy\x2dcheck3814705316-merged.mount: Deactivated successfully. Jan 13 20:38:55.703563 dockerd[2233]: time="2025-01-13T20:38:55.703510086Z" level=info msg="Loading containers: start." 
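dockerd is starting here; a few entries below it logs "API listen on /run/docker.sock". Once that happens, the Engine API can be probed over the unix socket with nothing but the stdlib; GET /version returns the daemon version (27.3.1 in this log):

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket, enough to talk to the Docker Engine API."""
        def __init__(self, path: str):
            super().__init__("localhost")
            self._path = path
        def connect(self) -> None:
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    info = json.loads(conn.getresponse().read())
    print(info["Version"], info["ApiVersion"])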
Jan 13 20:38:56.003639 kernel: Initializing XFRM netlink socket Jan 13 20:38:56.038369 (udev-worker)[2256]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:38:56.119250 systemd-networkd[1739]: docker0: Link UP Jan 13 20:38:56.155065 dockerd[2233]: time="2025-01-13T20:38:56.155019328Z" level=info msg="Loading containers: done." Jan 13 20:38:56.206055 dockerd[2233]: time="2025-01-13T20:38:56.205993654Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:38:56.206749 dockerd[2233]: time="2025-01-13T20:38:56.206169177Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 13 20:38:56.206814 dockerd[2233]: time="2025-01-13T20:38:56.206758377Z" level=info msg="Daemon has completed initialization" Jan 13 20:38:56.291729 dockerd[2233]: time="2025-01-13T20:38:56.290745697Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:38:56.291271 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:38:57.884645 containerd[1892]: time="2025-01-13T20:38:57.883247703Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 13 20:38:58.685395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1842599533.mount: Deactivated successfully. Jan 13 20:39:01.825238 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:39:01.843787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:39:02.208802 containerd[1892]: time="2025-01-13T20:39:02.199380850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:02.213758 containerd[1892]: time="2025-01-13T20:39:02.213423536Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675642" Jan 13 20:39:02.233176 containerd[1892]: time="2025-01-13T20:39:02.233048901Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:02.259127 containerd[1892]: time="2025-01-13T20:39:02.259069830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:02.260382 containerd[1892]: time="2025-01-13T20:39:02.260333497Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 4.37703823s" Jan 13 20:39:02.260382 containerd[1892]: time="2025-01-13T20:39:02.260387935Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Jan 13 20:39:02.348262 containerd[1892]: time="2025-01-13T20:39:02.348231506Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 13 20:39:02.677872 systemd[1]: Started kubelet.service - kubelet: The 
Kubernetes Node Agent. Jan 13 20:39:02.699481 (kubelet)[2493]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:39:02.897197 kubelet[2493]: E0113 20:39:02.897050 2493 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:39:02.903437 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:39:02.903673 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:39:05.940494 containerd[1892]: time="2025-01-13T20:39:05.940439376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:05.942713 containerd[1892]: time="2025-01-13T20:39:05.942632612Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606409" Jan 13 20:39:05.944848 containerd[1892]: time="2025-01-13T20:39:05.944779425Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:05.949777 containerd[1892]: time="2025-01-13T20:39:05.948960700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:05.950939 containerd[1892]: time="2025-01-13T20:39:05.950893764Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 3.602474864s" Jan 13 20:39:05.951051 containerd[1892]: time="2025-01-13T20:39:05.950943859Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Jan 13 20:39:05.992196 containerd[1892]: time="2025-01-13T20:39:05.992091173Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 13 20:39:08.431592 containerd[1892]: time="2025-01-13T20:39:08.431535410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:08.433239 containerd[1892]: time="2025-01-13T20:39:08.433046772Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783035" Jan 13 20:39:08.435201 containerd[1892]: time="2025-01-13T20:39:08.434848007Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:08.438415 containerd[1892]: time="2025-01-13T20:39:08.438381657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:08.439566 containerd[1892]: time="2025-01-13T20:39:08.439531690Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 2.447336719s" Jan 13 20:39:08.439810 containerd[1892]: time="2025-01-13T20:39:08.439759868Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Jan 13 20:39:08.480885 containerd[1892]: time="2025-01-13T20:39:08.480820443Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 20:39:10.200704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1249652737.mount: Deactivated successfully. Jan 13 20:39:10.907586 containerd[1892]: time="2025-01-13T20:39:10.907520798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:10.909367 containerd[1892]: time="2025-01-13T20:39:10.909170827Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057470" Jan 13 20:39:10.911814 containerd[1892]: time="2025-01-13T20:39:10.911123318Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:10.915205 containerd[1892]: time="2025-01-13T20:39:10.915147857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:10.916635 containerd[1892]: time="2025-01-13T20:39:10.916324893Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.43544427s" Jan 13 20:39:10.916635 containerd[1892]: time="2025-01-13T20:39:10.916544279Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 13 20:39:10.945220 containerd[1892]: time="2025-01-13T20:39:10.945173668Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:39:11.574224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1623865342.mount: Deactivated successfully. Jan 13 20:39:13.154354 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:39:13.164922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:39:15.244198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
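Kubernetes Node Agent.

systemd is now cycling kubelet on its restart timer; the counter visible in these entries ("restart counter is at 1", then 2, then 3) can also be read back from the unit itself. A small probe, assuming the running systemd is new enough to expose the NRestarts property:

    import subprocess

    def n_restarts(unit: str) -> int:
        """Read systemd's restart counter for a unit (NRestarts property)."""
        out = subprocess.run(
            ["systemctl", "show", "-p", "NRestarts", "--value", unit],
            capture_output=True, text=True, check=True)
        return int(out.stdout.strip())

    # print(n_restarts("kubelet.service"))  # 2 at this point in the log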
Jan 13 20:39:15.258926 (kubelet)[2573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:39:15.347026 kubelet[2573]: E0113 20:39:15.346962 2573 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:39:15.351158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:39:15.351363 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:39:16.021688 containerd[1892]: time="2025-01-13T20:39:16.021633704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:16.023641 containerd[1892]: time="2025-01-13T20:39:16.023440991Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 20:39:16.025301 containerd[1892]: time="2025-01-13T20:39:16.025239464Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:16.029106 containerd[1892]: time="2025-01-13T20:39:16.029047245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:16.030544 containerd[1892]: time="2025-01-13T20:39:16.030429189Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 5.085220369s" Jan 13 20:39:16.030544 containerd[1892]: time="2025-01-13T20:39:16.030540574Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 20:39:16.060833 containerd[1892]: time="2025-01-13T20:39:16.060789133Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:39:16.585764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3548762486.mount: Deactivated successfully. 
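The containerd "Pulled image ... in <duration>" entries scattered through this section are handy for spotting slow pulls. A throwaway parser for them (the regex assumes the backslash-escaped quotes exactly as these journal lines are rendered; adjust it if yours differ):

    import re, sys

    PAT = re.compile(r'Pulled image \\"(.+?)\\" with .+? in ([0-9.]+)(ms|s)')

    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            secs = float(m.group(2)) / (1000 if m.group(3) == "ms" else 1)
            print(f"{m.group(1)}: {secs:.3f}s")

    # usage: journalctl -u containerd | python3 pull_times.py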
Jan 13 20:39:16.608362 containerd[1892]: time="2025-01-13T20:39:16.608305830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:16.609824 containerd[1892]: time="2025-01-13T20:39:16.609632976Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 20:39:16.613088 containerd[1892]: time="2025-01-13T20:39:16.611569603Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:16.614743 containerd[1892]: time="2025-01-13T20:39:16.614707795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:16.617745 containerd[1892]: time="2025-01-13T20:39:16.615994529Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 555.1593ms" Jan 13 20:39:16.617745 containerd[1892]: time="2025-01-13T20:39:16.616035129Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 20:39:16.648013 containerd[1892]: time="2025-01-13T20:39:16.647967846Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 13 20:39:17.267114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4139435352.mount: Deactivated successfully. Jan 13 20:39:19.374982 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 13 20:39:21.489317 containerd[1892]: time="2025-01-13T20:39:21.489261959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:21.490805 containerd[1892]: time="2025-01-13T20:39:21.490693998Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 13 20:39:21.492701 containerd[1892]: time="2025-01-13T20:39:21.492225300Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:21.497108 containerd[1892]: time="2025-01-13T20:39:21.495761257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:39:21.497108 containerd[1892]: time="2025-01-13T20:39:21.496937455Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.848932005s" Jan 13 20:39:21.497108 containerd[1892]: time="2025-01-13T20:39:21.496976698Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 13 20:39:25.385278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 20:39:25.397037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:39:25.853127 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:39:25.853309 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:39:25.853845 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:39:25.864297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:39:25.907108 systemd[1]: Reloading requested from client PID 2726 ('systemctl') (unit session-7.scope)... Jan 13 20:39:25.907122 systemd[1]: Reloading... Jan 13 20:39:26.055641 zram_generator::config[2766]: No configuration found. Jan 13 20:39:26.251092 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:39:26.417231 systemd[1]: Reloading finished in 509 ms. Jan 13 20:39:26.488904 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:39:26.489270 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:39:26.489950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:39:26.495077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:39:27.099148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:39:27.117264 (kubelet)[2823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:39:27.210916 kubelet[2823]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:39:27.210916 kubelet[2823]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:39:27.210916 kubelet[2823]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:39:27.211363 kubelet[2823]: I0113 20:39:27.211004 2823 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:39:27.415925 kubelet[2823]: I0113 20:39:27.415872 2823 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:39:27.415925 kubelet[2823]: I0113 20:39:27.415909 2823 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:39:27.416231 kubelet[2823]: I0113 20:39:27.416207 2823 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:39:27.455460 kubelet[2823]: I0113 20:39:27.454453 2823 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:39:27.455460 kubelet[2823]: E0113 20:39:27.454967 2823 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.21.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:27.473776 kubelet[2823]: I0113 20:39:27.473740 2823 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:39:27.478778 kubelet[2823]: I0113 20:39:27.478713 2823 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:39:27.482342 kubelet[2823]: I0113 20:39:27.478787 2823 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-52","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:39:27.483466 kubelet[2823]: I0113 20:39:27.483377 2823 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:39:27.483466 kubelet[2823]: I0113 20:39:27.483465 2823 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:39:27.487502 kubelet[2823]: I0113 20:39:27.487461 2823 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:39:27.489386 kubelet[2823]: W0113 20:39:27.489322 2823 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-52&limit=500&resourceVersion=0": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:27.489491 kubelet[2823]: E0113 20:39:27.489412 2823 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-52&limit=500&resourceVersion=0": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:27.490794 kubelet[2823]: I0113 20:39:27.490768 2823 kubelet.go:400] "Attempting to sync node with API server" Jan 13 20:39:27.490987 kubelet[2823]: I0113 20:39:27.490931 2823 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:39:27.491036 kubelet[2823]: I0113 20:39:27.490990 2823 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:39:27.491036 kubelet[2823]: I0113 20:39:27.491017 2823 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:39:27.497336 kubelet[2823]: W0113 20:39:27.497262 2823 reflector.go:547] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.Service: Get "https://172.31.21.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:27.497336 kubelet[2823]: E0113 20:39:27.497336 2823 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:27.497542 kubelet[2823]: I0113 20:39:27.497448 2823 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:39:27.501048 kubelet[2823]: I0113 20:39:27.499624 2823 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:39:27.501048 kubelet[2823]: W0113 20:39:27.499945 2823 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:39:27.501048 kubelet[2823]: I0113 20:39:27.500779 2823 server.go:1264] "Started kubelet" Jan 13 20:39:27.507720 kubelet[2823]: I0113 20:39:27.507523 2823 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:39:27.509439 kubelet[2823]: I0113 20:39:27.508897 2823 server.go:455] "Adding debug handlers to kubelet server" Jan 13 20:39:27.512167 kubelet[2823]: I0113 20:39:27.511415 2823 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:39:27.512167 kubelet[2823]: I0113 20:39:27.511837 2823 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:39:27.512167 kubelet[2823]: E0113 20:39:27.512022 2823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.52:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.52:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-52.181a5b18988206dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-52,UID:ip-172-31-21-52,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-52,},FirstTimestamp:2025-01-13 20:39:27.500748509 +0000 UTC m=+0.373985623,LastTimestamp:2025-01-13 20:39:27.500748509 +0000 UTC m=+0.373985623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-52,}" Jan 13 20:39:27.515218 kubelet[2823]: I0113 20:39:27.515075 2823 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:39:27.525571 kubelet[2823]: E0113 20:39:27.524538 2823 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-21-52\" not found" Jan 13 20:39:27.525571 kubelet[2823]: I0113 20:39:27.524619 2823 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:39:27.528030 kubelet[2823]: I0113 20:39:27.527484 2823 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:39:27.528030 kubelet[2823]: I0113 20:39:27.527590 2823 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:39:27.528305 kubelet[2823]: W0113 20:39:27.528253 2823 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://172.31.21.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:27.528376 kubelet[2823]: E0113 20:39:27.528322 2823 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:27.528523 kubelet[2823]: E0113 20:39:27.528501 2823 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:39:27.528918 kubelet[2823]: E0113 20:39:27.528883 2823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-52?timeout=10s\": dial tcp 172.31.21.52:6443: connect: connection refused" interval="200ms" Jan 13 20:39:27.529886 kubelet[2823]: I0113 20:39:27.529793 2823 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:39:27.530528 kubelet[2823]: I0113 20:39:27.529960 2823 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:39:27.532310 kubelet[2823]: I0113 20:39:27.532288 2823 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:39:27.561493 kubelet[2823]: I0113 20:39:27.561438 2823 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:39:27.563968 kubelet[2823]: I0113 20:39:27.563940 2823 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:39:27.564431 kubelet[2823]: I0113 20:39:27.564066 2823 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:39:27.564431 kubelet[2823]: I0113 20:39:27.564094 2823 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:39:27.564431 kubelet[2823]: E0113 20:39:27.564134 2823 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:39:27.571948 kubelet[2823]: W0113 20:39:27.571891 2823 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:27.571948 kubelet[2823]: E0113 20:39:27.571953 2823 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:27.573400 kubelet[2823]: I0113 20:39:27.573362 2823 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:39:27.573400 kubelet[2823]: I0113 20:39:27.573385 2823 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:39:27.573400 kubelet[2823]: I0113 20:39:27.573405 2823 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:39:27.576670 kubelet[2823]: I0113 20:39:27.576645 2823 policy_none.go:49] "None policy: Start" Jan 13 20:39:27.579371 kubelet[2823]: I0113 20:39:27.579351 2823 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:39:27.579516 kubelet[2823]: I0113 20:39:27.579380 2823 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:39:27.591174 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:39:27.606980 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:39:27.618598 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 13 20:39:27.622626 kubelet[2823]: I0113 20:39:27.621650 2823 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:39:27.622626 kubelet[2823]: I0113 20:39:27.621846 2823 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:39:27.622626 kubelet[2823]: I0113 20:39:27.621968 2823 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:39:27.627783 kubelet[2823]: E0113 20:39:27.627759 2823 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-52\" not found" Jan 13 20:39:27.630029 kubelet[2823]: I0113 20:39:27.629974 2823 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-52" Jan 13 20:39:27.630543 kubelet[2823]: E0113 20:39:27.630520 2823 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.52:6443/api/v1/nodes\": dial tcp 172.31.21.52:6443: connect: connection refused" node="ip-172-31-21-52" Jan 13 20:39:27.665002 kubelet[2823]: I0113 20:39:27.664946 2823 topology_manager.go:215] "Topology Admit Handler" podUID="7396c0d409f8919262ce0e94a06b6246" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-21-52" Jan 13 20:39:27.667688 kubelet[2823]: I0113 20:39:27.667473 2823 topology_manager.go:215] "Topology Admit Handler" podUID="a5b139e13870220e11acb107194a98fc" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-21-52" Jan 13 20:39:27.670072 kubelet[2823]: I0113 20:39:27.669216 2823 topology_manager.go:215] "Topology Admit Handler" podUID="0a547c8de6336b872fdf8ba5963d9497" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-21-52" Jan 13 20:39:27.681003 systemd[1]: Created slice kubepods-burstable-poda5b139e13870220e11acb107194a98fc.slice - libcontainer container kubepods-burstable-poda5b139e13870220e11acb107194a98fc.slice. Jan 13 20:39:27.698879 systemd[1]: Created slice kubepods-burstable-pod7396c0d409f8919262ce0e94a06b6246.slice - libcontainer container kubepods-burstable-pod7396c0d409f8919262ce0e94a06b6246.slice. Jan 13 20:39:27.704490 systemd[1]: Created slice kubepods-burstable-pod0a547c8de6336b872fdf8ba5963d9497.slice - libcontainer container kubepods-burstable-pod0a547c8de6336b872fdf8ba5963d9497.slice. 
Jan 13 20:39:27.730915 kubelet[2823]: E0113 20:39:27.730816 2823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-52?timeout=10s\": dial tcp 172.31.21.52:6443: connect: connection refused" interval="400ms" Jan 13 20:39:27.828720 kubelet[2823]: I0113 20:39:27.828602 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a5b139e13870220e11acb107194a98fc-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-52\" (UID: \"a5b139e13870220e11acb107194a98fc\") " pod="kube-system/kube-controller-manager-ip-172-31-21-52" Jan 13 20:39:27.828986 kubelet[2823]: I0113 20:39:27.828728 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a547c8de6336b872fdf8ba5963d9497-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-52\" (UID: \"0a547c8de6336b872fdf8ba5963d9497\") " pod="kube-system/kube-scheduler-ip-172-31-21-52" Jan 13 20:39:27.828986 kubelet[2823]: I0113 20:39:27.828770 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a5b139e13870220e11acb107194a98fc-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-52\" (UID: \"a5b139e13870220e11acb107194a98fc\") " pod="kube-system/kube-controller-manager-ip-172-31-21-52" Jan 13 20:39:27.828986 kubelet[2823]: I0113 20:39:27.828797 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5b139e13870220e11acb107194a98fc-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-52\" (UID: \"a5b139e13870220e11acb107194a98fc\") " pod="kube-system/kube-controller-manager-ip-172-31-21-52" Jan 13 20:39:27.828986 kubelet[2823]: I0113 20:39:27.828822 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7396c0d409f8919262ce0e94a06b6246-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-52\" (UID: \"7396c0d409f8919262ce0e94a06b6246\") " pod="kube-system/kube-apiserver-ip-172-31-21-52" Jan 13 20:39:27.828986 kubelet[2823]: I0113 20:39:27.828888 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5b139e13870220e11acb107194a98fc-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-52\" (UID: \"a5b139e13870220e11acb107194a98fc\") " pod="kube-system/kube-controller-manager-ip-172-31-21-52" Jan 13 20:39:27.829392 kubelet[2823]: I0113 20:39:27.828917 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5b139e13870220e11acb107194a98fc-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-52\" (UID: \"a5b139e13870220e11acb107194a98fc\") " pod="kube-system/kube-controller-manager-ip-172-31-21-52" Jan 13 20:39:27.829392 kubelet[2823]: I0113 20:39:27.828952 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7396c0d409f8919262ce0e94a06b6246-ca-certs\") pod \"kube-apiserver-ip-172-31-21-52\" (UID: 
\"7396c0d409f8919262ce0e94a06b6246\") " pod="kube-system/kube-apiserver-ip-172-31-21-52" Jan 13 20:39:27.829392 kubelet[2823]: I0113 20:39:27.828980 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7396c0d409f8919262ce0e94a06b6246-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-52\" (UID: \"7396c0d409f8919262ce0e94a06b6246\") " pod="kube-system/kube-apiserver-ip-172-31-21-52" Jan 13 20:39:27.833266 kubelet[2823]: I0113 20:39:27.833235 2823 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-52" Jan 13 20:39:27.833803 kubelet[2823]: E0113 20:39:27.833767 2823 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.52:6443/api/v1/nodes\": dial tcp 172.31.21.52:6443: connect: connection refused" node="ip-172-31-21-52" Jan 13 20:39:27.997396 containerd[1892]: time="2025-01-13T20:39:27.997083342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-52,Uid:a5b139e13870220e11acb107194a98fc,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:28.006471 containerd[1892]: time="2025-01-13T20:39:28.006422839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-52,Uid:7396c0d409f8919262ce0e94a06b6246,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:28.012097 containerd[1892]: time="2025-01-13T20:39:28.012049639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-52,Uid:0a547c8de6336b872fdf8ba5963d9497,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:28.131586 kubelet[2823]: E0113 20:39:28.131533 2823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-52?timeout=10s\": dial tcp 172.31.21.52:6443: connect: connection refused" interval="800ms" Jan 13 20:39:28.236546 kubelet[2823]: I0113 20:39:28.236515 2823 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-52" Jan 13 20:39:28.237421 kubelet[2823]: E0113 20:39:28.237108 2823 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.52:6443/api/v1/nodes\": dial tcp 172.31.21.52:6443: connect: connection refused" node="ip-172-31-21-52" Jan 13 20:39:28.449945 kubelet[2823]: W0113 20:39:28.449885 2823 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-52&limit=500&resourceVersion=0": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:28.449945 kubelet[2823]: E0113 20:39:28.449950 2823 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-52&limit=500&resourceVersion=0": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:28.854866 kubelet[2823]: W0113 20:39:28.854729 2823 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:28.854866 kubelet[2823]: E0113 20:39:28.854795 2823 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://172.31.21.52:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:28.890285 kubelet[2823]: W0113 20:39:28.890187 2823 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:28.890285 kubelet[2823]: E0113 20:39:28.890262 2823 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:28.901141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3219756319.mount: Deactivated successfully. Jan 13 20:39:28.918316 containerd[1892]: time="2025-01-13T20:39:28.918260324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:39:28.925987 containerd[1892]: time="2025-01-13T20:39:28.925870248Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 20:39:28.927462 containerd[1892]: time="2025-01-13T20:39:28.927421703Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:39:28.929689 containerd[1892]: time="2025-01-13T20:39:28.929648389Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:39:28.932340 kubelet[2823]: E0113 20:39:28.932289 2823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-52?timeout=10s\": dial tcp 172.31.21.52:6443: connect: connection refused" interval="1.6s" Jan 13 20:39:28.933173 containerd[1892]: time="2025-01-13T20:39:28.932997114Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:39:28.937645 containerd[1892]: time="2025-01-13T20:39:28.937575688Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:39:28.937777 containerd[1892]: time="2025-01-13T20:39:28.937725552Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:39:28.939721 containerd[1892]: time="2025-01-13T20:39:28.939659694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:39:28.942247 containerd[1892]: time="2025-01-13T20:39:28.941998863Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 930.15579ms" Jan 13 20:39:28.949531 containerd[1892]: time="2025-01-13T20:39:28.949484717Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 942.964882ms" Jan 13 20:39:28.950539 containerd[1892]: time="2025-01-13T20:39:28.950500962Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 938.365581ms" Jan 13 20:39:29.058830 kubelet[2823]: I0113 20:39:29.058795 2823 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-52" Jan 13 20:39:29.059162 kubelet[2823]: E0113 20:39:29.059124 2823 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.52:6443/api/v1/nodes\": dial tcp 172.31.21.52:6443: connect: connection refused" node="ip-172-31-21-52" Jan 13 20:39:29.154033 kubelet[2823]: W0113 20:39:29.153081 2823 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:29.154033 kubelet[2823]: E0113 20:39:29.153163 2823 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:29.258417 containerd[1892]: time="2025-01-13T20:39:29.258310930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:29.258901 containerd[1892]: time="2025-01-13T20:39:29.258387105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:29.258901 containerd[1892]: time="2025-01-13T20:39:29.258408266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:29.258901 containerd[1892]: time="2025-01-13T20:39:29.258506731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:29.264888 containerd[1892]: time="2025-01-13T20:39:29.264213666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:29.264888 containerd[1892]: time="2025-01-13T20:39:29.264297154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:29.264888 containerd[1892]: time="2025-01-13T20:39:29.264321473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:29.264888 containerd[1892]: time="2025-01-13T20:39:29.264432133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:29.268708 containerd[1892]: time="2025-01-13T20:39:29.257924065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:29.268708 containerd[1892]: time="2025-01-13T20:39:29.268200853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:29.268708 containerd[1892]: time="2025-01-13T20:39:29.268229843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:29.268708 containerd[1892]: time="2025-01-13T20:39:29.268328341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:29.336104 systemd[1]: Started cri-containerd-12052ec5ebb1cad0a5f187d00ca728907416c68ab5477080bf16f7f450fbc54d.scope - libcontainer container 12052ec5ebb1cad0a5f187d00ca728907416c68ab5477080bf16f7f450fbc54d. Jan 13 20:39:29.339061 systemd[1]: Started cri-containerd-e5770a3867802437459b61f61c39b68ea33083cbc90bd9115a91be78d91f3900.scope - libcontainer container e5770a3867802437459b61f61c39b68ea33083cbc90bd9115a91be78d91f3900. Jan 13 20:39:29.352046 systemd[1]: Started cri-containerd-50d03434c4a5cd095476c93b05b25475e21a70e46b5ed5175948b12f664f4b89.scope - libcontainer container 50d03434c4a5cd095476c93b05b25475e21a70e46b5ed5175948b12f664f4b89. Jan 13 20:39:29.468543 containerd[1892]: time="2025-01-13T20:39:29.468426726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-52,Uid:7396c0d409f8919262ce0e94a06b6246,Namespace:kube-system,Attempt:0,} returns sandbox id \"12052ec5ebb1cad0a5f187d00ca728907416c68ab5477080bf16f7f450fbc54d\"" Jan 13 20:39:29.477207 containerd[1892]: time="2025-01-13T20:39:29.477147486Z" level=info msg="CreateContainer within sandbox \"12052ec5ebb1cad0a5f187d00ca728907416c68ab5477080bf16f7f450fbc54d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:39:29.480375 containerd[1892]: time="2025-01-13T20:39:29.480313767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-52,Uid:a5b139e13870220e11acb107194a98fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"50d03434c4a5cd095476c93b05b25475e21a70e46b5ed5175948b12f664f4b89\"" Jan 13 20:39:29.484377 containerd[1892]: time="2025-01-13T20:39:29.484327473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-52,Uid:0a547c8de6336b872fdf8ba5963d9497,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5770a3867802437459b61f61c39b68ea33083cbc90bd9115a91be78d91f3900\"" Jan 13 20:39:29.487670 containerd[1892]: time="2025-01-13T20:39:29.487501564Z" level=info msg="CreateContainer within sandbox \"50d03434c4a5cd095476c93b05b25475e21a70e46b5ed5175948b12f664f4b89\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:39:29.493001 containerd[1892]: time="2025-01-13T20:39:29.492761379Z" level=info msg="CreateContainer within sandbox \"e5770a3867802437459b61f61c39b68ea33083cbc90bd9115a91be78d91f3900\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 
20:39:29.539182 kubelet[2823]: E0113 20:39:29.539137 2823 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.21.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:29.542326 containerd[1892]: time="2025-01-13T20:39:29.541959170Z" level=info msg="CreateContainer within sandbox \"12052ec5ebb1cad0a5f187d00ca728907416c68ab5477080bf16f7f450fbc54d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bb42ef1490c300a04161f446da2ec31a258d62417dab15660b3abca3df9e7621\"" Jan 13 20:39:29.544258 containerd[1892]: time="2025-01-13T20:39:29.544220819Z" level=info msg="CreateContainer within sandbox \"50d03434c4a5cd095476c93b05b25475e21a70e46b5ed5175948b12f664f4b89\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ee86b973aeb6d9df22ff82f13ce479c948f66714e76dac2a58f856602e6b62cd\"" Jan 13 20:39:29.544563 containerd[1892]: time="2025-01-13T20:39:29.544531553Z" level=info msg="StartContainer for \"bb42ef1490c300a04161f446da2ec31a258d62417dab15660b3abca3df9e7621\"" Jan 13 20:39:29.548197 containerd[1892]: time="2025-01-13T20:39:29.546632730Z" level=info msg="CreateContainer within sandbox \"e5770a3867802437459b61f61c39b68ea33083cbc90bd9115a91be78d91f3900\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0ff1a239fbc48f7d20239da3bcafb52ba8163c03a8ff889c7d45f437130a4040\"" Jan 13 20:39:29.548197 containerd[1892]: time="2025-01-13T20:39:29.546836341Z" level=info msg="StartContainer for \"ee86b973aeb6d9df22ff82f13ce479c948f66714e76dac2a58f856602e6b62cd\"" Jan 13 20:39:29.561072 containerd[1892]: time="2025-01-13T20:39:29.561033123Z" level=info msg="StartContainer for \"0ff1a239fbc48f7d20239da3bcafb52ba8163c03a8ff889c7d45f437130a4040\"" Jan 13 20:39:29.591246 systemd[1]: Started cri-containerd-ee86b973aeb6d9df22ff82f13ce479c948f66714e76dac2a58f856602e6b62cd.scope - libcontainer container ee86b973aeb6d9df22ff82f13ce479c948f66714e76dac2a58f856602e6b62cd. Jan 13 20:39:29.616877 systemd[1]: Started cri-containerd-bb42ef1490c300a04161f446da2ec31a258d62417dab15660b3abca3df9e7621.scope - libcontainer container bb42ef1490c300a04161f446da2ec31a258d62417dab15660b3abca3df9e7621. Jan 13 20:39:29.648211 systemd[1]: Started cri-containerd-0ff1a239fbc48f7d20239da3bcafb52ba8163c03a8ff889c7d45f437130a4040.scope - libcontainer container 0ff1a239fbc48f7d20239da3bcafb52ba8163c03a8ff889c7d45f437130a4040. 
Jan 13 20:39:29.739850 containerd[1892]: time="2025-01-13T20:39:29.739285835Z" level=info msg="StartContainer for \"ee86b973aeb6d9df22ff82f13ce479c948f66714e76dac2a58f856602e6b62cd\" returns successfully" Jan 13 20:39:29.786109 containerd[1892]: time="2025-01-13T20:39:29.785839113Z" level=info msg="StartContainer for \"bb42ef1490c300a04161f446da2ec31a258d62417dab15660b3abca3df9e7621\" returns successfully" Jan 13 20:39:29.821267 containerd[1892]: time="2025-01-13T20:39:29.820978703Z" level=info msg="StartContainer for \"0ff1a239fbc48f7d20239da3bcafb52ba8163c03a8ff889c7d45f437130a4040\" returns successfully" Jan 13 20:39:30.533302 kubelet[2823]: E0113 20:39:30.533248 2823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-52?timeout=10s\": dial tcp 172.31.21.52:6443: connect: connection refused" interval="3.2s" Jan 13 20:39:30.544129 kubelet[2823]: W0113 20:39:30.544079 2823 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:30.545629 kubelet[2823]: E0113 20:39:30.544673 2823 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.52:6443: connect: connection refused Jan 13 20:39:30.663631 kubelet[2823]: I0113 20:39:30.661509 2823 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-52" Jan 13 20:39:33.067569 kubelet[2823]: E0113 20:39:33.067330 2823 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-21-52.181a5b18988206dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-52,UID:ip-172-31-21-52,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-52,},FirstTimestamp:2025-01-13 20:39:27.500748509 +0000 UTC m=+0.373985623,LastTimestamp:2025-01-13 20:39:27.500748509 +0000 UTC m=+0.373985623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-52,}" Jan 13 20:39:33.126458 kubelet[2823]: I0113 20:39:33.126417 2823 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-21-52" Jan 13 20:39:33.209443 update_engine[1874]: I20250113 20:39:33.209367 1874 update_attempter.cc:509] Updating boot flags... 
Jan 13 20:39:33.297648 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (3112) Jan 13 20:39:33.495338 kubelet[2823]: I0113 20:39:33.495113 2823 apiserver.go:52] "Watching apiserver" Jan 13 20:39:33.528818 kubelet[2823]: I0113 20:39:33.527839 2823 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:39:33.571655 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (3112) Jan 13 20:39:33.640860 kubelet[2823]: E0113 20:39:33.640705 2823 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-21-52\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-21-52" Jan 13 20:39:35.590820 systemd[1]: Reloading requested from client PID 3281 ('systemctl') (unit session-7.scope)... Jan 13 20:39:35.590840 systemd[1]: Reloading... Jan 13 20:39:35.736035 zram_generator::config[3324]: No configuration found. Jan 13 20:39:35.961485 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:39:36.168658 systemd[1]: Reloading finished in 577 ms. Jan 13 20:39:36.239636 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:39:36.251874 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:39:36.252264 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:39:36.263975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:39:37.033188 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:39:37.048175 (kubelet)[3378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:39:37.131190 kubelet[3378]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:39:37.131190 kubelet[3378]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:39:37.131190 kubelet[3378]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:39:37.131799 kubelet[3378]: I0113 20:39:37.131236 3378 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:39:37.153051 kubelet[3378]: I0113 20:39:37.152742 3378 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:39:37.153051 kubelet[3378]: I0113 20:39:37.152774 3378 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:39:37.153372 kubelet[3378]: I0113 20:39:37.153112 3378 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:39:37.159652 kubelet[3378]: I0113 20:39:37.159575 3378 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 13 20:39:37.161841 kubelet[3378]: I0113 20:39:37.161807 3378 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:39:37.180565 kubelet[3378]: I0113 20:39:37.180498 3378 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:39:37.181142 kubelet[3378]: I0113 20:39:37.180994 3378 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:39:37.181821 kubelet[3378]: I0113 20:39:37.181141 3378 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-52","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:39:37.181953 kubelet[3378]: I0113 20:39:37.181839 3378 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:39:37.181953 kubelet[3378]: I0113 20:39:37.181853 3378 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:39:37.182053 kubelet[3378]: I0113 20:39:37.181977 3378 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:39:37.182128 kubelet[3378]: I0113 20:39:37.182110 3378 kubelet.go:400] "Attempting to sync node with API server" Jan 13 20:39:37.182169 kubelet[3378]: I0113 20:39:37.182132 3378 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:39:37.185778 kubelet[3378]: I0113 20:39:37.185669 3378 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:39:37.185778 kubelet[3378]: I0113 20:39:37.185709 3378 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:39:37.202934 kubelet[3378]: I0113 20:39:37.201659 3378 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:39:37.202934 kubelet[3378]: I0113 20:39:37.201878 3378 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:39:37.207279 kubelet[3378]: I0113 20:39:37.207256 3378 server.go:1264] "Started kubelet" Jan 13 20:39:37.210656 kubelet[3378]: 
I0113 20:39:37.210564 3378 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:39:37.212041 kubelet[3378]: I0113 20:39:37.211980 3378 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:39:37.213099 kubelet[3378]: I0113 20:39:37.213082 3378 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:39:37.224659 kubelet[3378]: I0113 20:39:37.224331 3378 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:39:37.228671 kubelet[3378]: I0113 20:39:37.226514 3378 server.go:455] "Adding debug handlers to kubelet server" Jan 13 20:39:37.233413 kubelet[3378]: I0113 20:39:37.232044 3378 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:39:37.237141 sudo[3391]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:39:37.237571 sudo[3391]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:39:37.241470 kubelet[3378]: I0113 20:39:37.240417 3378 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:39:37.241470 kubelet[3378]: I0113 20:39:37.240948 3378 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:39:37.249829 kubelet[3378]: I0113 20:39:37.245452 3378 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:39:37.249829 kubelet[3378]: I0113 20:39:37.245626 3378 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:39:37.256907 kubelet[3378]: I0113 20:39:37.256863 3378 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:39:37.268307 kubelet[3378]: I0113 20:39:37.268262 3378 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:39:37.271244 kubelet[3378]: I0113 20:39:37.271177 3378 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:39:37.271479 kubelet[3378]: I0113 20:39:37.271466 3378 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:39:37.271656 kubelet[3378]: I0113 20:39:37.271645 3378 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:39:37.271789 kubelet[3378]: E0113 20:39:37.271771 3378 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:39:37.348132 kubelet[3378]: I0113 20:39:37.348028 3378 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-52" Jan 13 20:39:37.363444 kubelet[3378]: I0113 20:39:37.363289 3378 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:39:37.363444 kubelet[3378]: I0113 20:39:37.363310 3378 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:39:37.363444 kubelet[3378]: I0113 20:39:37.363335 3378 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:39:37.366240 kubelet[3378]: I0113 20:39:37.363950 3378 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:39:37.366240 kubelet[3378]: I0113 20:39:37.363968 3378 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:39:37.366240 kubelet[3378]: I0113 20:39:37.363995 3378 policy_none.go:49] "None policy: Start" Jan 13 20:39:37.368528 kubelet[3378]: I0113 20:39:37.367509 3378 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:39:37.368528 kubelet[3378]: I0113 20:39:37.367539 3378 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:39:37.368528 kubelet[3378]: I0113 20:39:37.367730 3378 state_mem.go:75] "Updated machine memory state" Jan 13 20:39:37.370941 kubelet[3378]: I0113 20:39:37.370911 3378 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-21-52" Jan 13 20:39:37.371647 kubelet[3378]: I0113 20:39:37.371142 3378 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-21-52" Jan 13 20:39:37.373657 kubelet[3378]: E0113 20:39:37.371904 3378 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:39:37.395032 kubelet[3378]: I0113 20:39:37.394857 3378 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:39:37.401129 kubelet[3378]: I0113 20:39:37.400925 3378 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:39:37.401629 kubelet[3378]: I0113 20:39:37.401491 3378 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:39:37.573097 kubelet[3378]: I0113 20:39:37.572487 3378 topology_manager.go:215] "Topology Admit Handler" podUID="7396c0d409f8919262ce0e94a06b6246" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-21-52" Jan 13 20:39:37.573097 kubelet[3378]: I0113 20:39:37.572636 3378 topology_manager.go:215] "Topology Admit Handler" podUID="a5b139e13870220e11acb107194a98fc" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-21-52" Jan 13 20:39:37.573097 kubelet[3378]: I0113 20:39:37.572682 3378 topology_manager.go:215] "Topology Admit Handler" podUID="0a547c8de6336b872fdf8ba5963d9497" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-21-52" Jan 13 20:39:37.643531 kubelet[3378]: I0113 20:39:37.643335 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/7396c0d409f8919262ce0e94a06b6246-ca-certs\") pod \"kube-apiserver-ip-172-31-21-52\" (UID: \"7396c0d409f8919262ce0e94a06b6246\") " pod="kube-system/kube-apiserver-ip-172-31-21-52" Jan 13 20:39:37.744779 kubelet[3378]: I0113 20:39:37.744345 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5b139e13870220e11acb107194a98fc-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-52\" (UID: \"a5b139e13870220e11acb107194a98fc\") " pod="kube-system/kube-controller-manager-ip-172-31-21-52" Jan 13 20:39:37.744779 kubelet[3378]: I0113 20:39:37.744400 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a547c8de6336b872fdf8ba5963d9497-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-52\" (UID: \"0a547c8de6336b872fdf8ba5963d9497\") " pod="kube-system/kube-scheduler-ip-172-31-21-52" Jan 13 20:39:37.744779 kubelet[3378]: I0113 20:39:37.744428 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a5b139e13870220e11acb107194a98fc-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-52\" (UID: \"a5b139e13870220e11acb107194a98fc\") " pod="kube-system/kube-controller-manager-ip-172-31-21-52" Jan 13 20:39:37.744779 kubelet[3378]: I0113 20:39:37.744452 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a5b139e13870220e11acb107194a98fc-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-52\" (UID: \"a5b139e13870220e11acb107194a98fc\") " pod="kube-system/kube-controller-manager-ip-172-31-21-52" Jan 13 20:39:37.745386 kubelet[3378]: I0113 20:39:37.744479 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5b139e13870220e11acb107194a98fc-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-52\" (UID: \"a5b139e13870220e11acb107194a98fc\") " pod="kube-system/kube-controller-manager-ip-172-31-21-52" Jan 13 20:39:37.745386 kubelet[3378]: I0113 20:39:37.745200 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7396c0d409f8919262ce0e94a06b6246-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-52\" (UID: \"7396c0d409f8919262ce0e94a06b6246\") " pod="kube-system/kube-apiserver-ip-172-31-21-52" Jan 13 20:39:37.745386 kubelet[3378]: I0113 20:39:37.745275 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7396c0d409f8919262ce0e94a06b6246-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-52\" (UID: \"7396c0d409f8919262ce0e94a06b6246\") " pod="kube-system/kube-apiserver-ip-172-31-21-52" Jan 13 20:39:37.745643 kubelet[3378]: I0113 20:39:37.745301 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5b139e13870220e11acb107194a98fc-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-52\" (UID: \"a5b139e13870220e11acb107194a98fc\") " pod="kube-system/kube-controller-manager-ip-172-31-21-52" Jan 13 20:39:38.057378 sudo[3391]: 
pam_unix(sudo:session): session closed for user root Jan 13 20:39:38.188347 kubelet[3378]: I0113 20:39:38.188141 3378 apiserver.go:52] "Watching apiserver" Jan 13 20:39:38.241861 kubelet[3378]: I0113 20:39:38.241820 3378 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:39:38.362151 kubelet[3378]: I0113 20:39:38.361791 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-52" podStartSLOduration=1.361678844 podStartE2EDuration="1.361678844s" podCreationTimestamp="2025-01-13 20:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:39:38.345187507 +0000 UTC m=+1.290111974" watchObservedRunningTime="2025-01-13 20:39:38.361678844 +0000 UTC m=+1.306603312" Jan 13 20:39:38.386782 kubelet[3378]: I0113 20:39:38.386721 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-52" podStartSLOduration=1.386698526 podStartE2EDuration="1.386698526s" podCreationTimestamp="2025-01-13 20:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:39:38.362774978 +0000 UTC m=+1.307699445" watchObservedRunningTime="2025-01-13 20:39:38.386698526 +0000 UTC m=+1.331622982" Jan 13 20:39:40.521945 sudo[2215]: pam_unix(sudo:session): session closed for user root Jan 13 20:39:40.545091 sshd[2214]: Connection closed by 139.178.89.65 port 40672 Jan 13 20:39:40.546376 sshd-session[2212]: pam_unix(sshd:session): session closed for user core Jan 13 20:39:40.555218 systemd-logind[1872]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:39:40.555973 systemd[1]: sshd@6-172.31.21.52:22-139.178.89.65:40672.service: Deactivated successfully. Jan 13 20:39:40.566151 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:39:40.566451 systemd[1]: session-7.scope: Consumed 5.955s CPU time, 186.3M memory peak, 0B memory swap peak. Jan 13 20:39:40.575661 systemd-logind[1872]: Removed session 7. Jan 13 20:39:41.239138 kubelet[3378]: I0113 20:39:41.239082 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-52" podStartSLOduration=4.23906412 podStartE2EDuration="4.23906412s" podCreationTimestamp="2025-01-13 20:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:39:38.387462069 +0000 UTC m=+1.332386536" watchObservedRunningTime="2025-01-13 20:39:41.23906412 +0000 UTC m=+4.183988583" Jan 13 20:39:49.452687 kubelet[3378]: I0113 20:39:49.452640 3378 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:39:49.457979 containerd[1892]: time="2025-01-13T20:39:49.457713025Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
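[annotation] The two entries above show kubelet pushing the node's pod CIDR (192.168.0.0/24) to the container runtime over CRI. As a hedged illustration only — the socket path and client wiring are assumptions, not taken from this log — a minimal Go client could issue the same UpdateRuntimeConfig call like this:

```go
// Minimal sketch: push a pod CIDR to a CRI runtime, mirroring kubelet's
// "Updating runtime config through cri with podcidr" step above.
// The containerd socket path is an assumption (common on Flatcar).
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI: %v", err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatalf("UpdateRuntimeConfig: %v", err)
	}
	log.Println("pod CIDR pushed to runtime")
}
```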
Jan 13 20:39:49.459165 kubelet[3378]: I0113 20:39:49.458819 3378 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:39:50.381531 kubelet[3378]: I0113 20:39:50.381484 3378 topology_manager.go:215] "Topology Admit Handler" podUID="ac606504-fc05-4200-853c-2d28f6d3f1de" podNamespace="kube-system" podName="cilium-wrjtg" Jan 13 20:39:50.381837 kubelet[3378]: I0113 20:39:50.381813 3378 topology_manager.go:215] "Topology Admit Handler" podUID="7e278539-7d99-406f-943e-6be688e3583d" podNamespace="kube-system" podName="kube-proxy-cnjnf" Jan 13 20:39:50.416175 systemd[1]: Created slice kubepods-besteffort-pod7e278539_7d99_406f_943e_6be688e3583d.slice - libcontainer container kubepods-besteffort-pod7e278539_7d99_406f_943e_6be688e3583d.slice. Jan 13 20:39:50.453928 systemd[1]: Created slice kubepods-burstable-podac606504_fc05_4200_853c_2d28f6d3f1de.slice - libcontainer container kubepods-burstable-podac606504_fc05_4200_853c_2d28f6d3f1de.slice. Jan 13 20:39:50.487742 kubelet[3378]: I0113 20:39:50.486198 3378 topology_manager.go:215] "Topology Admit Handler" podUID="ca79dc98-bdfa-47a3-8064-a0e6c9a68bec" podNamespace="kube-system" podName="cilium-operator-599987898-6jhgh" Jan 13 20:39:50.496178 systemd[1]: Created slice kubepods-besteffort-podca79dc98_bdfa_47a3_8064_a0e6c9a68bec.slice - libcontainer container kubepods-besteffort-podca79dc98_bdfa_47a3_8064_a0e6c9a68bec.slice. Jan 13 20:39:50.536741 kubelet[3378]: I0113 20:39:50.536682 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-hostproc\") pod \"cilium-wrjtg\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " pod="kube-system/cilium-wrjtg" Jan 13 20:39:50.536921 kubelet[3378]: I0113 20:39:50.536752 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-host-proc-sys-net\") pod \"cilium-wrjtg\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " pod="kube-system/cilium-wrjtg" Jan 13 20:39:50.536921 kubelet[3378]: I0113 20:39:50.536781 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8vbt\" (UniqueName: \"kubernetes.io/projected/7e278539-7d99-406f-943e-6be688e3583d-kube-api-access-w8vbt\") pod \"kube-proxy-cnjnf\" (UID: \"7e278539-7d99-406f-943e-6be688e3583d\") " pod="kube-system/kube-proxy-cnjnf" Jan 13 20:39:50.536921 kubelet[3378]: I0113 20:39:50.536814 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-cilium-run\") pod \"cilium-wrjtg\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " pod="kube-system/cilium-wrjtg" Jan 13 20:39:50.536921 kubelet[3378]: I0113 20:39:50.536835 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e278539-7d99-406f-943e-6be688e3583d-lib-modules\") pod \"kube-proxy-cnjnf\" (UID: \"7e278539-7d99-406f-943e-6be688e3583d\") " pod="kube-system/kube-proxy-cnjnf" Jan 13 20:39:50.536921 kubelet[3378]: I0113 20:39:50.536859 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/ac606504-fc05-4200-853c-2d28f6d3f1de-cilium-config-path\") pod \"cilium-wrjtg\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " pod="kube-system/cilium-wrjtg" Jan 13 20:39:50.537208 kubelet[3378]: I0113 20:39:50.536883 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-host-proc-sys-kernel\") pod \"cilium-wrjtg\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " pod="kube-system/cilium-wrjtg" Jan 13 20:39:50.537208 kubelet[3378]: I0113 20:39:50.536905 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-cilium-cgroup\") pod \"cilium-wrjtg\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " pod="kube-system/cilium-wrjtg" Jan 13 20:39:50.537208 kubelet[3378]: I0113 20:39:50.536929 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-lib-modules\") pod \"cilium-wrjtg\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " pod="kube-system/cilium-wrjtg" Jan 13 20:39:50.537208 kubelet[3378]: I0113 20:39:50.536956 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ac606504-fc05-4200-853c-2d28f6d3f1de-hubble-tls\") pod \"cilium-wrjtg\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " pod="kube-system/cilium-wrjtg" Jan 13 20:39:50.537208 kubelet[3378]: I0113 20:39:50.536981 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-bpf-maps\") pod \"cilium-wrjtg\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " pod="kube-system/cilium-wrjtg" Jan 13 20:39:50.537208 kubelet[3378]: I0113 20:39:50.537006 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-xtables-lock\") pod \"cilium-wrjtg\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " pod="kube-system/cilium-wrjtg" Jan 13 20:39:50.537446 kubelet[3378]: I0113 20:39:50.537029 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e278539-7d99-406f-943e-6be688e3583d-xtables-lock\") pod \"kube-proxy-cnjnf\" (UID: \"7e278539-7d99-406f-943e-6be688e3583d\") " pod="kube-system/kube-proxy-cnjnf" Jan 13 20:39:50.537446 kubelet[3378]: I0113 20:39:50.537052 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ac606504-fc05-4200-853c-2d28f6d3f1de-clustermesh-secrets\") pod \"cilium-wrjtg\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " pod="kube-system/cilium-wrjtg" Jan 13 20:39:50.537446 kubelet[3378]: I0113 20:39:50.537077 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzdlb\" (UniqueName: \"kubernetes.io/projected/ac606504-fc05-4200-853c-2d28f6d3f1de-kube-api-access-jzdlb\") pod \"cilium-wrjtg\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " 
pod="kube-system/cilium-wrjtg" Jan 13 20:39:50.537446 kubelet[3378]: I0113 20:39:50.537110 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e278539-7d99-406f-943e-6be688e3583d-kube-proxy\") pod \"kube-proxy-cnjnf\" (UID: \"7e278539-7d99-406f-943e-6be688e3583d\") " pod="kube-system/kube-proxy-cnjnf" Jan 13 20:39:50.537446 kubelet[3378]: I0113 20:39:50.537197 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-cni-path\") pod \"cilium-wrjtg\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " pod="kube-system/cilium-wrjtg" Jan 13 20:39:50.537446 kubelet[3378]: I0113 20:39:50.537220 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-etc-cni-netd\") pod \"cilium-wrjtg\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " pod="kube-system/cilium-wrjtg" Jan 13 20:39:50.638947 kubelet[3378]: I0113 20:39:50.637800 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca79dc98-bdfa-47a3-8064-a0e6c9a68bec-cilium-config-path\") pod \"cilium-operator-599987898-6jhgh\" (UID: \"ca79dc98-bdfa-47a3-8064-a0e6c9a68bec\") " pod="kube-system/cilium-operator-599987898-6jhgh" Jan 13 20:39:50.638947 kubelet[3378]: I0113 20:39:50.637920 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qvnv\" (UniqueName: \"kubernetes.io/projected/ca79dc98-bdfa-47a3-8064-a0e6c9a68bec-kube-api-access-9qvnv\") pod \"cilium-operator-599987898-6jhgh\" (UID: \"ca79dc98-bdfa-47a3-8064-a0e6c9a68bec\") " pod="kube-system/cilium-operator-599987898-6jhgh" Jan 13 20:39:50.741357 containerd[1892]: time="2025-01-13T20:39:50.741303735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cnjnf,Uid:7e278539-7d99-406f-943e-6be688e3583d,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:50.762948 containerd[1892]: time="2025-01-13T20:39:50.762110730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wrjtg,Uid:ac606504-fc05-4200-853c-2d28f6d3f1de,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:50.803490 containerd[1892]: time="2025-01-13T20:39:50.803446062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-6jhgh,Uid:ca79dc98-bdfa-47a3-8064-a0e6c9a68bec,Namespace:kube-system,Attempt:0,}" Jan 13 20:39:50.827970 containerd[1892]: time="2025-01-13T20:39:50.827872604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:50.827970 containerd[1892]: time="2025-01-13T20:39:50.827942040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:50.828658 containerd[1892]: time="2025-01-13T20:39:50.828171399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:50.828658 containerd[1892]: time="2025-01-13T20:39:50.828288777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:50.834522 containerd[1892]: time="2025-01-13T20:39:50.834094964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:50.834522 containerd[1892]: time="2025-01-13T20:39:50.834163727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:50.834522 containerd[1892]: time="2025-01-13T20:39:50.834188509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:50.834522 containerd[1892]: time="2025-01-13T20:39:50.834287870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:50.876339 systemd[1]: Started cri-containerd-d5c04950f7d771d4620cfd0786f4d5822806b0375ded6b6143440f8171f779de.scope - libcontainer container d5c04950f7d771d4620cfd0786f4d5822806b0375ded6b6143440f8171f779de. Jan 13 20:39:50.888645 containerd[1892]: time="2025-01-13T20:39:50.885183240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:39:50.888645 containerd[1892]: time="2025-01-13T20:39:50.885261923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:39:50.888645 containerd[1892]: time="2025-01-13T20:39:50.885282254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:50.888645 containerd[1892]: time="2025-01-13T20:39:50.886279340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:39:50.888299 systemd[1]: Started cri-containerd-60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f.scope - libcontainer container 60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f. Jan 13 20:39:50.940732 systemd[1]: Started cri-containerd-3c28a652cd8ab5a5116e2d972514ba4b16e705c7450d26f8f655d6c598bd5c67.scope - libcontainer container 3c28a652cd8ab5a5116e2d972514ba4b16e705c7450d26f8f655d6c598bd5c67. 
Jan 13 20:39:50.956047 containerd[1892]: time="2025-01-13T20:39:50.956006038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wrjtg,Uid:ac606504-fc05-4200-853c-2d28f6d3f1de,Namespace:kube-system,Attempt:0,} returns sandbox id \"60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f\"" Jan 13 20:39:50.960829 containerd[1892]: time="2025-01-13T20:39:50.960748678Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:39:50.993096 containerd[1892]: time="2025-01-13T20:39:50.993038131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cnjnf,Uid:7e278539-7d99-406f-943e-6be688e3583d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5c04950f7d771d4620cfd0786f4d5822806b0375ded6b6143440f8171f779de\"" Jan 13 20:39:50.999557 containerd[1892]: time="2025-01-13T20:39:50.999514531Z" level=info msg="CreateContainer within sandbox \"d5c04950f7d771d4620cfd0786f4d5822806b0375ded6b6143440f8171f779de\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:39:51.045025 containerd[1892]: time="2025-01-13T20:39:51.044971584Z" level=info msg="CreateContainer within sandbox \"d5c04950f7d771d4620cfd0786f4d5822806b0375ded6b6143440f8171f779de\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"419ef33f9dc3f6ee4e4efe9881fb2fc7f5775e665e4c97cfa952046cfbb696ed\"" Jan 13 20:39:51.049030 containerd[1892]: time="2025-01-13T20:39:51.048984816Z" level=info msg="StartContainer for \"419ef33f9dc3f6ee4e4efe9881fb2fc7f5775e665e4c97cfa952046cfbb696ed\"" Jan 13 20:39:51.082951 containerd[1892]: time="2025-01-13T20:39:51.082904973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-6jhgh,Uid:ca79dc98-bdfa-47a3-8064-a0e6c9a68bec,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c28a652cd8ab5a5116e2d972514ba4b16e705c7450d26f8f655d6c598bd5c67\"" Jan 13 20:39:51.109920 systemd[1]: Started cri-containerd-419ef33f9dc3f6ee4e4efe9881fb2fc7f5775e665e4c97cfa952046cfbb696ed.scope - libcontainer container 419ef33f9dc3f6ee4e4efe9881fb2fc7f5775e665e4c97cfa952046cfbb696ed. Jan 13 20:39:51.204374 containerd[1892]: time="2025-01-13T20:39:51.204244776Z" level=info msg="StartContainer for \"419ef33f9dc3f6ee4e4efe9881fb2fc7f5775e665e4c97cfa952046cfbb696ed\" returns successfully" Jan 13 20:39:51.363756 kubelet[3378]: I0113 20:39:51.363635 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cnjnf" podStartSLOduration=1.3636048889999999 podStartE2EDuration="1.363604889s" podCreationTimestamp="2025-01-13 20:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:39:51.3631409 +0000 UTC m=+14.308065368" watchObservedRunningTime="2025-01-13 20:39:51.363604889 +0000 UTC m=+14.308529360" Jan 13 20:40:06.595517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3768383637.mount: Deactivated successfully. 
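[annotation] The podStartSLOduration/podStartE2EDuration figures in these pod_startup_latency_tracker lines are plain timestamp arithmetic: roughly (time observed running) minus (pod creation time) minus (image pull time). A minimal Go sketch using the kube-proxy-cnjnf values verbatim; the small gap to the reported 1.363604889s is expected, since kubelet uses its internally recorded start time rather than the observation time:

```go
// Reproduce the rough arithmetic behind kubelet's
// "Observed pod startup duration" entries, with values copied from the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-01-13 20:39:50 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-01-13 20:39:51.3631409 +0000 UTC")
	if err != nil {
		panic(err)
	}
	pull := time.Duration(0) // firstStartedPulling/lastFinishedPulling are zero here
	fmt.Println(observed.Sub(created) - pull) // 1.3631409s, close to the reported SLO duration
}
```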
Jan 13 20:40:10.659531 containerd[1892]: time="2025-01-13T20:40:10.659475631Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:40:10.662635 containerd[1892]: time="2025-01-13T20:40:10.662561233Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166734703" Jan 13 20:40:10.664383 containerd[1892]: time="2025-01-13T20:40:10.664091236Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:40:10.665764 containerd[1892]: time="2025-01-13T20:40:10.665667659Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 19.702518412s" Jan 13 20:40:10.665899 containerd[1892]: time="2025-01-13T20:40:10.665877470Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 20:40:10.670423 containerd[1892]: time="2025-01-13T20:40:10.669971048Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:40:10.673485 containerd[1892]: time="2025-01-13T20:40:10.673362474Z" level=info msg="CreateContainer within sandbox \"60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:40:10.798819 containerd[1892]: time="2025-01-13T20:40:10.798750948Z" level=info msg="CreateContainer within sandbox \"60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37\"" Jan 13 20:40:10.800656 containerd[1892]: time="2025-01-13T20:40:10.799894132Z" level=info msg="StartContainer for \"ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37\"" Jan 13 20:40:10.968286 systemd[1]: Started cri-containerd-ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37.scope - libcontainer container ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37. Jan 13 20:40:11.021396 containerd[1892]: time="2025-01-13T20:40:11.021259617Z" level=info msg="StartContainer for \"ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37\" returns successfully" Jan 13 20:40:11.043080 systemd[1]: cri-containerd-ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37.scope: Deactivated successfully. 
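[annotation] For scale, the cilium image pull above reports 166734703 bytes read over 19.702518412s, which works out to roughly 8 MiB/s. Illustrative arithmetic only:

```go
// Back-of-the-envelope throughput check for the image pull logged above.
package main

import "fmt"

func main() {
	const bytesRead = 166734703.0 // "bytes read" from the log
	const seconds = 19.702518412  // pull duration from the log
	fmt.Printf("%.2f MiB/s\n", bytesRead/seconds/(1024*1024)) // ≈ 8.07 MiB/s
}
```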
Jan 13 20:40:11.607862 containerd[1892]: time="2025-01-13T20:40:11.599252921Z" level=info msg="shim disconnected" id=ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37 namespace=k8s.io Jan 13 20:40:11.608226 containerd[1892]: time="2025-01-13T20:40:11.607866554Z" level=warning msg="cleaning up after shim disconnected" id=ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37 namespace=k8s.io Jan 13 20:40:11.608226 containerd[1892]: time="2025-01-13T20:40:11.607885775Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:40:11.787296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37-rootfs.mount: Deactivated successfully. Jan 13 20:40:11.815778 containerd[1892]: time="2025-01-13T20:40:11.815727102Z" level=info msg="CreateContainer within sandbox \"60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:40:11.838499 containerd[1892]: time="2025-01-13T20:40:11.838445828Z" level=info msg="CreateContainer within sandbox \"60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef\"" Jan 13 20:40:11.842305 containerd[1892]: time="2025-01-13T20:40:11.842259653Z" level=info msg="StartContainer for \"49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef\"" Jan 13 20:40:11.885802 systemd[1]: Started cri-containerd-49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef.scope - libcontainer container 49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef. Jan 13 20:40:11.955242 containerd[1892]: time="2025-01-13T20:40:11.955074544Z" level=info msg="StartContainer for \"49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef\" returns successfully" Jan 13 20:40:11.965801 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:40:11.966435 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:40:11.966768 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:40:11.980419 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:40:11.980838 systemd[1]: cri-containerd-49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef.scope: Deactivated successfully. Jan 13 20:40:12.039243 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef-rootfs.mount: Deactivated successfully. Jan 13 20:40:12.053562 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 20:40:12.056370 containerd[1892]: time="2025-01-13T20:40:12.056255401Z" level=info msg="shim disconnected" id=49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef namespace=k8s.io Jan 13 20:40:12.056518 containerd[1892]: time="2025-01-13T20:40:12.056367620Z" level=warning msg="cleaning up after shim disconnected" id=49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef namespace=k8s.io Jan 13 20:40:12.056518 containerd[1892]: time="2025-01-13T20:40:12.056384495Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:40:12.739722 containerd[1892]: time="2025-01-13T20:40:12.739569828Z" level=info msg="CreateContainer within sandbox \"60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:40:12.819892 containerd[1892]: time="2025-01-13T20:40:12.819799974Z" level=info msg="CreateContainer within sandbox \"60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a\"" Jan 13 20:40:12.822185 containerd[1892]: time="2025-01-13T20:40:12.821643733Z" level=info msg="StartContainer for \"b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a\"" Jan 13 20:40:12.883100 systemd[1]: Started cri-containerd-b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a.scope - libcontainer container b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a. Jan 13 20:40:12.936030 systemd[1]: cri-containerd-b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a.scope: Deactivated successfully. Jan 13 20:40:12.956264 containerd[1892]: time="2025-01-13T20:40:12.939370150Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac606504_fc05_4200_853c_2d28f6d3f1de.slice/cri-containerd-b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a.scope/memory.events\": no such file or directory" Jan 13 20:40:12.960017 containerd[1892]: time="2025-01-13T20:40:12.959974800Z" level=info msg="StartContainer for \"b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a\" returns successfully" Jan 13 20:40:12.985425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a-rootfs.mount: Deactivated successfully. Jan 13 20:40:12.994654 containerd[1892]: time="2025-01-13T20:40:12.994415985Z" level=info msg="shim disconnected" id=b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a namespace=k8s.io Jan 13 20:40:12.994654 containerd[1892]: time="2025-01-13T20:40:12.994531043Z" level=warning msg="cleaning up after shim disconnected" id=b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a namespace=k8s.io Jan 13 20:40:12.994654 containerd[1892]: time="2025-01-13T20:40:12.994544189Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:40:13.758144 containerd[1892]: time="2025-01-13T20:40:13.758023234Z" level=info msg="CreateContainer within sandbox \"60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:40:13.847118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3014015956.mount: Deactivated successfully. 
Jan 13 20:40:13.866840 containerd[1892]: time="2025-01-13T20:40:13.866790735Z" level=info msg="CreateContainer within sandbox \"60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286\"" Jan 13 20:40:13.874684 containerd[1892]: time="2025-01-13T20:40:13.873181795Z" level=info msg="StartContainer for \"217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286\"" Jan 13 20:40:13.984890 systemd[1]: Started cri-containerd-217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286.scope - libcontainer container 217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286. Jan 13 20:40:14.045149 systemd[1]: cri-containerd-217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286.scope: Deactivated successfully. Jan 13 20:40:14.060437 containerd[1892]: time="2025-01-13T20:40:14.060391432Z" level=info msg="StartContainer for \"217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286\" returns successfully" Jan 13 20:40:14.087909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286-rootfs.mount: Deactivated successfully. Jan 13 20:40:14.105461 containerd[1892]: time="2025-01-13T20:40:14.105393623Z" level=info msg="shim disconnected" id=217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286 namespace=k8s.io Jan 13 20:40:14.105461 containerd[1892]: time="2025-01-13T20:40:14.105453753Z" level=warning msg="cleaning up after shim disconnected" id=217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286 namespace=k8s.io Jan 13 20:40:14.105461 containerd[1892]: time="2025-01-13T20:40:14.105464702Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:40:14.801146 containerd[1892]: time="2025-01-13T20:40:14.801064944Z" level=info msg="CreateContainer within sandbox \"60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:40:14.854404 containerd[1892]: time="2025-01-13T20:40:14.854274955Z" level=info msg="CreateContainer within sandbox \"60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9\"" Jan 13 20:40:14.856106 containerd[1892]: time="2025-01-13T20:40:14.856070689Z" level=info msg="StartContainer for \"dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9\"" Jan 13 20:40:14.926183 systemd[1]: run-containerd-runc-k8s.io-dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9-runc.oQqkGP.mount: Deactivated successfully. Jan 13 20:40:14.938067 systemd[1]: Started cri-containerd-dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9.scope - libcontainer container dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9. 
Jan 13 20:40:15.015662 containerd[1892]: time="2025-01-13T20:40:15.015580082Z" level=info msg="StartContainer for \"dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9\" returns successfully" Jan 13 20:40:15.048640 containerd[1892]: time="2025-01-13T20:40:15.048562650Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:40:15.053934 containerd[1892]: time="2025-01-13T20:40:15.053718948Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907217" Jan 13 20:40:15.059650 containerd[1892]: time="2025-01-13T20:40:15.056526759Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:40:15.062844 containerd[1892]: time="2025-01-13T20:40:15.062027823Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.392011817s" Jan 13 20:40:15.062844 containerd[1892]: time="2025-01-13T20:40:15.062091408Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 20:40:15.087050 containerd[1892]: time="2025-01-13T20:40:15.086996830Z" level=info msg="CreateContainer within sandbox \"3c28a652cd8ab5a5116e2d972514ba4b16e705c7450d26f8f655d6c598bd5c67\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:40:15.185197 containerd[1892]: time="2025-01-13T20:40:15.185142486Z" level=info msg="CreateContainer within sandbox \"3c28a652cd8ab5a5116e2d972514ba4b16e705c7450d26f8f655d6c598bd5c67\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb\"" Jan 13 20:40:15.186513 containerd[1892]: time="2025-01-13T20:40:15.186471530Z" level=info msg="StartContainer for \"81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb\"" Jan 13 20:40:15.259959 systemd[1]: Started cri-containerd-81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb.scope - libcontainer container 81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb. 
Jan 13 20:40:15.344301 kubelet[3378]: I0113 20:40:15.343836 3378 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:40:15.416038 kubelet[3378]: I0113 20:40:15.415981 3378 topology_manager.go:215] "Topology Admit Handler" podUID="85eb15e2-ee76-455c-94e0-49fb53449ebb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tktmh" Jan 13 20:40:15.429817 containerd[1892]: time="2025-01-13T20:40:15.429667131Z" level=info msg="StartContainer for \"81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb\" returns successfully" Jan 13 20:40:15.438844 systemd[1]: Created slice kubepods-burstable-pod85eb15e2_ee76_455c_94e0_49fb53449ebb.slice - libcontainer container kubepods-burstable-pod85eb15e2_ee76_455c_94e0_49fb53449ebb.slice. Jan 13 20:40:15.468280 kubelet[3378]: I0113 20:40:15.467907 3378 topology_manager.go:215] "Topology Admit Handler" podUID="23673e44-4f7a-405e-8f39-f218b881fae1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-v65x8" Jan 13 20:40:15.486356 systemd[1]: Created slice kubepods-burstable-pod23673e44_4f7a_405e_8f39_f218b881fae1.slice - libcontainer container kubepods-burstable-pod23673e44_4f7a_405e_8f39_f218b881fae1.slice. Jan 13 20:40:15.525720 kubelet[3378]: I0113 20:40:15.525670 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85eb15e2-ee76-455c-94e0-49fb53449ebb-config-volume\") pod \"coredns-7db6d8ff4d-tktmh\" (UID: \"85eb15e2-ee76-455c-94e0-49fb53449ebb\") " pod="kube-system/coredns-7db6d8ff4d-tktmh" Jan 13 20:40:15.525881 kubelet[3378]: I0113 20:40:15.525731 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmgf7\" (UniqueName: \"kubernetes.io/projected/23673e44-4f7a-405e-8f39-f218b881fae1-kube-api-access-qmgf7\") pod \"coredns-7db6d8ff4d-v65x8\" (UID: \"23673e44-4f7a-405e-8f39-f218b881fae1\") " pod="kube-system/coredns-7db6d8ff4d-v65x8" Jan 13 20:40:15.525881 kubelet[3378]: I0113 20:40:15.525760 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plw9h\" (UniqueName: \"kubernetes.io/projected/85eb15e2-ee76-455c-94e0-49fb53449ebb-kube-api-access-plw9h\") pod \"coredns-7db6d8ff4d-tktmh\" (UID: \"85eb15e2-ee76-455c-94e0-49fb53449ebb\") " pod="kube-system/coredns-7db6d8ff4d-tktmh" Jan 13 20:40:15.525881 kubelet[3378]: I0113 20:40:15.525783 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23673e44-4f7a-405e-8f39-f218b881fae1-config-volume\") pod \"coredns-7db6d8ff4d-v65x8\" (UID: \"23673e44-4f7a-405e-8f39-f218b881fae1\") " pod="kube-system/coredns-7db6d8ff4d-v65x8" Jan 13 20:40:15.750897 containerd[1892]: time="2025-01-13T20:40:15.750453535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tktmh,Uid:85eb15e2-ee76-455c-94e0-49fb53449ebb,Namespace:kube-system,Attempt:0,}" Jan 13 20:40:15.795942 containerd[1892]: time="2025-01-13T20:40:15.795901446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v65x8,Uid:23673e44-4f7a-405e-8f39-f218b881fae1,Namespace:kube-system,Attempt:0,}" Jan 13 20:40:15.838072 systemd[1]: run-containerd-runc-k8s.io-dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9-runc.w44YC7.mount: Deactivated successfully. 
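[annotation] The VerifyControllerAttachedVolume entries above enumerate every volume kubelet attaches per pod (config-volume plus a projected API-access token for each coredns pod here). A self-contained Go sketch that tallies them from a journal stream on stdin; the regular expression is an assumption fitted to the escaped-quote format of these lines:

```go
// Count attached volumes per pod from kubelet journal lines on stdin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches: for volume \"<name>\" ... pod="<namespace>/<pod>"
	re := regexp.MustCompile(`volume \\"([^"\\]+)\\" .* pod="([^"]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[2]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%-60s %d volumes\n", pod, n)
	}
}
```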
Jan 13 20:40:15.891250 kubelet[3378]: I0113 20:40:15.887271 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wrjtg" podStartSLOduration=6.179290381 podStartE2EDuration="25.887244379s" podCreationTimestamp="2025-01-13 20:39:50 +0000 UTC" firstStartedPulling="2025-01-13 20:39:50.959382182 +0000 UTC m=+13.904306628" lastFinishedPulling="2025-01-13 20:40:10.667336167 +0000 UTC m=+33.612260626" observedRunningTime="2025-01-13 20:40:15.885007099 +0000 UTC m=+38.829931565" watchObservedRunningTime="2025-01-13 20:40:15.887244379 +0000 UTC m=+38.832168848" Jan 13 20:40:15.891250 kubelet[3378]: I0113 20:40:15.890948 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-6jhgh" podStartSLOduration=1.90334819 podStartE2EDuration="25.890928153s" podCreationTimestamp="2025-01-13 20:39:50 +0000 UTC" firstStartedPulling="2025-01-13 20:39:51.085378235 +0000 UTC m=+14.030302686" lastFinishedPulling="2025-01-13 20:40:15.072958193 +0000 UTC m=+38.017882649" observedRunningTime="2025-01-13 20:40:15.798581823 +0000 UTC m=+38.743506289" watchObservedRunningTime="2025-01-13 20:40:15.890928153 +0000 UTC m=+38.835852619" Jan 13 20:40:20.276115 systemd-networkd[1739]: cilium_host: Link UP Jan 13 20:40:20.276385 systemd-networkd[1739]: cilium_net: Link UP Jan 13 20:40:20.276919 systemd-networkd[1739]: cilium_net: Gained carrier Jan 13 20:40:20.277203 systemd-networkd[1739]: cilium_host: Gained carrier Jan 13 20:40:20.294404 (udev-worker)[4198]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:40:20.296327 (udev-worker)[4199]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:40:20.424903 systemd-networkd[1739]: cilium_host: Gained IPv6LL Jan 13 20:40:20.735191 systemd-networkd[1739]: cilium_vxlan: Link UP Jan 13 20:40:20.735207 systemd-networkd[1739]: cilium_vxlan: Gained carrier Jan 13 20:40:20.961650 systemd-networkd[1739]: cilium_net: Gained IPv6LL Jan 13 20:40:21.575698 kernel: NET: Registered PF_ALG protocol family Jan 13 20:40:22.370571 (udev-worker)[4209]: Network interface NamePolicy= disabled on kernel command line. 
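[annotation] The link-up events here (the cilium_host/cilium_net veth pair, cilium_vxlan, and the per-pod lxc* interfaces that follow) can be inspected programmatically. A hedged sketch assuming the third-party github.com/vishvananda/netlink package and a Linux host:

```go
// List the Cilium-related interfaces reported by systemd-networkd above
// and print their operational state. Requires Linux and the netlink package.
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/vishvananda/netlink"
)

func main() {
	links, err := netlink.LinkList()
	if err != nil {
		log.Fatal(err)
	}
	for _, l := range links {
		a := l.Attrs()
		if strings.HasPrefix(a.Name, "cilium_") || strings.HasPrefix(a.Name, "lxc") {
			fmt.Printf("%-20s %s\n", a.Name, a.OperState)
		}
	}
}
```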
Jan 13 20:40:22.371515 systemd-networkd[1739]: cilium_vxlan: Gained IPv6LL Jan 13 20:40:22.374079 systemd-networkd[1739]: lxc_health: Link UP Jan 13 20:40:22.383778 systemd-networkd[1739]: lxc_health: Gained carrier Jan 13 20:40:22.945276 systemd-networkd[1739]: lxc7e50f2d52eac: Link UP Jan 13 20:40:22.950840 kernel: eth0: renamed from tmpfe720 Jan 13 20:40:22.961461 systemd-networkd[1739]: lxc7e50f2d52eac: Gained carrier Jan 13 20:40:23.001113 systemd-networkd[1739]: lxce73cbee93c07: Link UP Jan 13 20:40:23.008902 kernel: eth0: renamed from tmp72aba Jan 13 20:40:23.013383 systemd-networkd[1739]: lxce73cbee93c07: Gained carrier Jan 13 20:40:24.096875 systemd-networkd[1739]: lxc7e50f2d52eac: Gained IPv6LL Jan 13 20:40:24.417676 systemd-networkd[1739]: lxc_health: Gained IPv6LL Jan 13 20:40:24.802853 systemd-networkd[1739]: lxce73cbee93c07: Gained IPv6LL Jan 13 20:40:27.127778 ntpd[1866]: Listen normally on 7 cilium_host 192.168.0.184:123 Jan 13 20:40:27.127879 ntpd[1866]: Listen normally on 8 cilium_net [fe80::ac8e:32ff:fe10:989c%4]:123 Jan 13 20:40:27.127938 ntpd[1866]: Listen normally on 9 cilium_host [fe80::2855:5bff:fe9e:f321%5]:123 Jan 13 20:40:27.127983 ntpd[1866]: Listen normally on 10 cilium_vxlan [fe80::e457:a6ff:fec8:60b8%6]:123 Jan 13 20:40:27.128025 ntpd[1866]: Listen normally on 11 lxc_health [fe80::4045:ddff:fea9:634b%8]:123 Jan 13 20:40:27.128064 ntpd[1866]: Listen normally on 12 lxc7e50f2d52eac [fe80::fccf:3cff:fed8:6f83%10]:123 Jan 13 20:40:27.128104 ntpd[1866]: Listen normally on 13 lxce73cbee93c07 [fe80::dc30:33ff:fe1a:ae58%12]:123 Jan 13 20:40:28.760794 containerd[1892]: time="2025-01-13T20:40:28.758320772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:40:28.760794 containerd[1892]: time="2025-01-13T20:40:28.760379666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:40:28.760794 containerd[1892]: time="2025-01-13T20:40:28.760455279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:28.761546 containerd[1892]: time="2025-01-13T20:40:28.761068543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:28.809718 containerd[1892]: time="2025-01-13T20:40:28.809259696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:40:28.809718 containerd[1892]: time="2025-01-13T20:40:28.809351247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:40:28.809718 containerd[1892]: time="2025-01-13T20:40:28.809373718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:28.809718 containerd[1892]: time="2025-01-13T20:40:28.809476781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:28.870393 systemd[1]: Started cri-containerd-72ababf5deb5411bac3c3240a9728af07fe64c605699382d4e8f71df603f3019.scope - libcontainer container 72ababf5deb5411bac3c3240a9728af07fe64c605699382d4e8f71df603f3019. Jan 13 20:40:28.903061 systemd[1]: run-containerd-runc-k8s.io-fe720341591440fe8a97b420060512f9069b172fb0a83374ea689c1938df6825-runc.1tww7h.mount: Deactivated successfully. Jan 13 20:40:28.927910 systemd[1]: Started cri-containerd-fe720341591440fe8a97b420060512f9069b172fb0a83374ea689c1938df6825.scope - libcontainer container fe720341591440fe8a97b420060512f9069b172fb0a83374ea689c1938df6825. Jan 13 20:40:29.065001 containerd[1892]: time="2025-01-13T20:40:29.063486758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tktmh,Uid:85eb15e2-ee76-455c-94e0-49fb53449ebb,Namespace:kube-system,Attempt:0,} returns sandbox id \"72ababf5deb5411bac3c3240a9728af07fe64c605699382d4e8f71df603f3019\"" Jan 13 20:40:29.089911 containerd[1892]: time="2025-01-13T20:40:29.089868868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v65x8,Uid:23673e44-4f7a-405e-8f39-f218b881fae1,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe720341591440fe8a97b420060512f9069b172fb0a83374ea689c1938df6825\"" Jan 13 20:40:29.101554 containerd[1892]: time="2025-01-13T20:40:29.100918574Z" level=info msg="CreateContainer within sandbox \"72ababf5deb5411bac3c3240a9728af07fe64c605699382d4e8f71df603f3019\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:40:29.102188 containerd[1892]: time="2025-01-13T20:40:29.102154218Z" level=info msg="CreateContainer within sandbox \"fe720341591440fe8a97b420060512f9069b172fb0a83374ea689c1938df6825\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:40:29.148245 containerd[1892]: time="2025-01-13T20:40:29.148102780Z" level=info msg="CreateContainer within sandbox \"fe720341591440fe8a97b420060512f9069b172fb0a83374ea689c1938df6825\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d882f169c7066e5a1081f10b82931e7c68446ed4fdecbcf2c50b4b868d806c0c\"" Jan 13 20:40:29.152297 containerd[1892]: time="2025-01-13T20:40:29.149602230Z" level=info msg="StartContainer for \"d882f169c7066e5a1081f10b82931e7c68446ed4fdecbcf2c50b4b868d806c0c\"" Jan 13 20:40:29.155691 containerd[1892]: time="2025-01-13T20:40:29.155365825Z" level=info msg="CreateContainer within sandbox \"72ababf5deb5411bac3c3240a9728af07fe64c605699382d4e8f71df603f3019\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e6726824c2f94e90f08abef3a175a2e7183deb644e7938437d8a6ab37f53c638\"" Jan 13 20:40:29.156678 containerd[1892]: time="2025-01-13T20:40:29.156313261Z" level=info msg="StartContainer for \"e6726824c2f94e90f08abef3a175a2e7183deb644e7938437d8a6ab37f53c638\"" Jan 13 20:40:29.247332 systemd[1]: Started 
cri-containerd-d882f169c7066e5a1081f10b82931e7c68446ed4fdecbcf2c50b4b868d806c0c.scope - libcontainer container d882f169c7066e5a1081f10b82931e7c68446ed4fdecbcf2c50b4b868d806c0c. Jan 13 20:40:29.257832 systemd[1]: Started cri-containerd-e6726824c2f94e90f08abef3a175a2e7183deb644e7938437d8a6ab37f53c638.scope - libcontainer container e6726824c2f94e90f08abef3a175a2e7183deb644e7938437d8a6ab37f53c638. Jan 13 20:40:29.359718 containerd[1892]: time="2025-01-13T20:40:29.359536092Z" level=info msg="StartContainer for \"d882f169c7066e5a1081f10b82931e7c68446ed4fdecbcf2c50b4b868d806c0c\" returns successfully" Jan 13 20:40:29.359718 containerd[1892]: time="2025-01-13T20:40:29.359692396Z" level=info msg="StartContainer for \"e6726824c2f94e90f08abef3a175a2e7183deb644e7938437d8a6ab37f53c638\" returns successfully" Jan 13 20:40:29.934154 kubelet[3378]: I0113 20:40:29.904471 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tktmh" podStartSLOduration=39.900207886 podStartE2EDuration="39.900207886s" podCreationTimestamp="2025-01-13 20:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:40:29.895763603 +0000 UTC m=+52.840688069" watchObservedRunningTime="2025-01-13 20:40:29.900207886 +0000 UTC m=+52.845132353" Jan 13 20:40:29.969747 kubelet[3378]: I0113 20:40:29.969298 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-v65x8" podStartSLOduration=39.969277256 podStartE2EDuration="39.969277256s" podCreationTimestamp="2025-01-13 20:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:40:29.969151258 +0000 UTC m=+52.914075728" watchObservedRunningTime="2025-01-13 20:40:29.969277256 +0000 UTC m=+52.914201722" Jan 13 20:40:30.060009 systemd[1]: Started sshd@7-172.31.21.52:22-139.178.89.65:57828.service - OpenSSH per-connection server daemon (139.178.89.65:57828). Jan 13 20:40:30.314217 sshd[4732]: Accepted publickey for core from 139.178.89.65 port 57828 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:40:30.318497 sshd-session[4732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:30.323683 systemd-logind[1872]: New session 8 of user core. Jan 13 20:40:30.330832 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:40:31.209191 sshd[4736]: Connection closed by 139.178.89.65 port 57828 Jan 13 20:40:31.210958 sshd-session[4732]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:31.214249 systemd[1]: sshd@7-172.31.21.52:22-139.178.89.65:57828.service: Deactivated successfully. Jan 13 20:40:31.217751 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:40:31.219702 systemd-logind[1872]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:40:31.221017 systemd-logind[1872]: Removed session 8. Jan 13 20:40:36.249597 systemd[1]: Started sshd@8-172.31.21.52:22-139.178.89.65:45366.service - OpenSSH per-connection server daemon (139.178.89.65:45366). Jan 13 20:40:36.467035 sshd[4748]: Accepted publickey for core from 139.178.89.65 port 45366 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:40:36.467738 sshd-session[4748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:36.473394 systemd-logind[1872]: New session 9 of user core. 
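[annotation] The m=+&lt;seconds&gt; suffix kubelet appends to its timestamps is the monotonic offset since process start, so wall-clock minus offset should be constant across entries. Checking with the coredns-7db6d8ff4d-tktmh line above lands on ~20:39:37.055, consistent with the earlier m=+1.290111974 entry at 20:39:38.345:

```go
// Verify that kubelet's wall-clock timestamp minus its monotonic "m=+" offset
// points at the same process-start instant for every entry.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	observed, err := time.Parse(layout, "2025-01-13 20:40:29.895763603 +0000 UTC")
	if err != nil {
		panic(err)
	}
	offset := time.Duration(52.840688069 * float64(time.Second)) // m=+52.840688069
	fmt.Println("kubelet started around:", observed.Add(-offset)) // ≈ 20:39:37.055
}
```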
Jan 13 20:40:36.478834 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:40:36.790869 sshd[4750]: Connection closed by 139.178.89.65 port 45366 Jan 13 20:40:36.791591 sshd-session[4748]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:36.797242 systemd[1]: sshd@8-172.31.21.52:22-139.178.89.65:45366.service: Deactivated successfully. Jan 13 20:40:36.802230 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:40:36.805480 systemd-logind[1872]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:40:36.808603 systemd-logind[1872]: Removed session 9. Jan 13 20:40:41.833286 systemd[1]: Started sshd@9-172.31.21.52:22-139.178.89.65:46402.service - OpenSSH per-connection server daemon (139.178.89.65:46402). Jan 13 20:40:42.016764 sshd[4767]: Accepted publickey for core from 139.178.89.65 port 46402 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:40:42.018133 sshd-session[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:42.025581 systemd-logind[1872]: New session 10 of user core. Jan 13 20:40:42.029879 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:40:42.281325 sshd[4769]: Connection closed by 139.178.89.65 port 46402 Jan 13 20:40:42.282880 sshd-session[4767]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:42.286675 systemd[1]: sshd@9-172.31.21.52:22-139.178.89.65:46402.service: Deactivated successfully. Jan 13 20:40:42.290120 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:40:42.292461 systemd-logind[1872]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:40:42.294362 systemd-logind[1872]: Removed session 10. Jan 13 20:40:47.328949 systemd[1]: Started sshd@10-172.31.21.52:22-139.178.89.65:46418.service - OpenSSH per-connection server daemon (139.178.89.65:46418). Jan 13 20:40:47.496873 sshd[4781]: Accepted publickey for core from 139.178.89.65 port 46418 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:40:47.498123 sshd-session[4781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:47.504759 systemd-logind[1872]: New session 11 of user core. Jan 13 20:40:47.510804 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:40:47.710193 sshd[4783]: Connection closed by 139.178.89.65 port 46418 Jan 13 20:40:47.710030 sshd-session[4781]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:47.717527 systemd[1]: sshd@10-172.31.21.52:22-139.178.89.65:46418.service: Deactivated successfully. Jan 13 20:40:47.718141 systemd-logind[1872]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:40:47.724255 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:40:47.730514 systemd-logind[1872]: Removed session 11. Jan 13 20:40:52.741981 systemd[1]: Started sshd@11-172.31.21.52:22-139.178.89.65:59534.service - OpenSSH per-connection server daemon (139.178.89.65:59534). Jan 13 20:40:52.935097 sshd[4798]: Accepted publickey for core from 139.178.89.65 port 59534 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:40:52.937734 sshd-session[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:52.945919 systemd-logind[1872]: New session 12 of user core. Jan 13 20:40:52.956999 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 13 20:40:53.200669 sshd[4800]: Connection closed by 139.178.89.65 port 59534 Jan 13 20:40:53.202813 sshd-session[4798]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:53.208484 systemd-logind[1872]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:40:53.209020 systemd[1]: sshd@11-172.31.21.52:22-139.178.89.65:59534.service: Deactivated successfully. Jan 13 20:40:53.212778 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:40:53.217374 systemd-logind[1872]: Removed session 12. Jan 13 20:40:53.237318 systemd[1]: Started sshd@12-172.31.21.52:22-139.178.89.65:59540.service - OpenSSH per-connection server daemon (139.178.89.65:59540). Jan 13 20:40:53.463164 sshd[4812]: Accepted publickey for core from 139.178.89.65 port 59540 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:40:53.465093 sshd-session[4812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:53.471835 systemd-logind[1872]: New session 13 of user core. Jan 13 20:40:53.476862 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:40:53.835297 sshd[4815]: Connection closed by 139.178.89.65 port 59540 Jan 13 20:40:53.846924 sshd-session[4812]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:53.891135 systemd[1]: sshd@12-172.31.21.52:22-139.178.89.65:59540.service: Deactivated successfully. Jan 13 20:40:53.894884 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:40:53.898837 systemd-logind[1872]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:40:53.903074 systemd[1]: Started sshd@13-172.31.21.52:22-139.178.89.65:59554.service - OpenSSH per-connection server daemon (139.178.89.65:59554). Jan 13 20:40:53.907913 systemd-logind[1872]: Removed session 13. Jan 13 20:40:54.096387 sshd[4824]: Accepted publickey for core from 139.178.89.65 port 59554 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:40:54.100444 sshd-session[4824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:54.108929 systemd-logind[1872]: New session 14 of user core. Jan 13 20:40:54.113818 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:40:54.418017 sshd[4826]: Connection closed by 139.178.89.65 port 59554 Jan 13 20:40:54.422477 sshd-session[4824]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:54.443055 systemd-logind[1872]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:40:54.445279 systemd[1]: sshd@13-172.31.21.52:22-139.178.89.65:59554.service: Deactivated successfully. Jan 13 20:40:54.450256 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:40:54.451905 systemd-logind[1872]: Removed session 14. Jan 13 20:40:59.445458 systemd[1]: Started sshd@14-172.31.21.52:22-139.178.89.65:59560.service - OpenSSH per-connection server daemon (139.178.89.65:59560). Jan 13 20:40:59.654444 sshd[4839]: Accepted publickey for core from 139.178.89.65 port 59560 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:40:59.655235 sshd-session[4839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:59.663221 systemd-logind[1872]: New session 15 of user core. Jan 13 20:40:59.674971 systemd[1]: Started session-15.scope - Session 15 of User core. 
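[annotation] The repeating pattern in these entries pairs a pam_unix "session opened" with a later "session closed" from the same sshd-session PID. An illustrative Go parser for journal lines on stdin; the timestamp layout is assumed from this log, and journald omits the year, which cancels out when computing durations:

```go
// Pair sshd-session open/close events by PID and print session durations.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var re = regexp.MustCompile(`^(\w+ +\d+ \d+:\d+:\d+\.\d+) sshd-session\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)

func main() {
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		m := re.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		ts, err := time.Parse("Jan 2 15:04:05.000000", m[1]) // year absent in journal prefix
		if err != nil {
			continue
		}
		pid, verb := m[2], m[3]
		if verb == "opened" {
			opened[pid] = ts
		} else if start, ok := opened[pid]; ok {
			fmt.Printf("sshd-session[%s] lasted %s\n", pid, ts.Sub(start))
			delete(opened, pid)
		}
	}
}
```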
Jan 13 20:40:59.898585 sshd[4841]: Connection closed by 139.178.89.65 port 59560 Jan 13 20:40:59.900417 sshd-session[4839]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:59.905620 systemd-logind[1872]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:40:59.906846 systemd[1]: sshd@14-172.31.21.52:22-139.178.89.65:59560.service: Deactivated successfully. Jan 13 20:40:59.909489 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:40:59.910812 systemd-logind[1872]: Removed session 15. Jan 13 20:41:04.939154 systemd[1]: Started sshd@15-172.31.21.52:22-139.178.89.65:60976.service - OpenSSH per-connection server daemon (139.178.89.65:60976). Jan 13 20:41:05.124981 sshd[4852]: Accepted publickey for core from 139.178.89.65 port 60976 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:41:05.125942 sshd-session[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:05.136808 systemd-logind[1872]: New session 16 of user core. Jan 13 20:41:05.148147 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:41:05.419454 sshd[4854]: Connection closed by 139.178.89.65 port 60976 Jan 13 20:41:05.421318 sshd-session[4852]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:05.425954 systemd-logind[1872]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:41:05.426849 systemd[1]: sshd@15-172.31.21.52:22-139.178.89.65:60976.service: Deactivated successfully. Jan 13 20:41:05.430412 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:41:05.433063 systemd-logind[1872]: Removed session 16. Jan 13 20:41:10.464977 systemd[1]: Started sshd@16-172.31.21.52:22-139.178.89.65:60984.service - OpenSSH per-connection server daemon (139.178.89.65:60984). Jan 13 20:41:10.647081 sshd[4865]: Accepted publickey for core from 139.178.89.65 port 60984 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:41:10.647863 sshd-session[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:10.654480 systemd-logind[1872]: New session 17 of user core. Jan 13 20:41:10.660924 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:41:10.946177 sshd[4867]: Connection closed by 139.178.89.65 port 60984 Jan 13 20:41:10.946971 sshd-session[4865]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:10.952901 systemd[1]: sshd@16-172.31.21.52:22-139.178.89.65:60984.service: Deactivated successfully. Jan 13 20:41:10.956024 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:41:10.957073 systemd-logind[1872]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:41:10.958556 systemd-logind[1872]: Removed session 17. Jan 13 20:41:10.988280 systemd[1]: Started sshd@17-172.31.21.52:22-139.178.89.65:32768.service - OpenSSH per-connection server daemon (139.178.89.65:32768). Jan 13 20:41:11.184946 sshd[4878]: Accepted publickey for core from 139.178.89.65 port 32768 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:41:11.187498 sshd-session[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:11.203458 systemd-logind[1872]: New session 18 of user core. Jan 13 20:41:11.216865 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 13 20:41:11.942350 sshd[4880]: Connection closed by 139.178.89.65 port 32768 Jan 13 20:41:11.944104 sshd-session[4878]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:11.952062 systemd[1]: sshd@17-172.31.21.52:22-139.178.89.65:32768.service: Deactivated successfully. Jan 13 20:41:11.957237 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:41:11.959235 systemd-logind[1872]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:41:11.976249 systemd-logind[1872]: Removed session 18. Jan 13 20:41:11.984362 systemd[1]: Started sshd@18-172.31.21.52:22-139.178.89.65:46906.service - OpenSSH per-connection server daemon (139.178.89.65:46906). Jan 13 20:41:12.232104 sshd[4889]: Accepted publickey for core from 139.178.89.65 port 46906 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:41:12.235405 sshd-session[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:12.250533 systemd-logind[1872]: New session 19 of user core. Jan 13 20:41:12.256931 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:41:14.730140 sshd[4891]: Connection closed by 139.178.89.65 port 46906 Jan 13 20:41:14.733348 sshd-session[4889]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:14.745738 systemd[1]: sshd@18-172.31.21.52:22-139.178.89.65:46906.service: Deactivated successfully. Jan 13 20:41:14.751838 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:41:14.762365 systemd-logind[1872]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:41:14.827111 systemd[1]: Started sshd@19-172.31.21.52:22-139.178.89.65:46922.service - OpenSSH per-connection server daemon (139.178.89.65:46922). Jan 13 20:41:14.829072 systemd-logind[1872]: Removed session 19. Jan 13 20:41:15.045014 sshd[4907]: Accepted publickey for core from 139.178.89.65 port 46922 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:41:15.048944 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:15.067560 systemd-logind[1872]: New session 20 of user core. Jan 13 20:41:15.075051 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:41:15.877668 sshd[4909]: Connection closed by 139.178.89.65 port 46922 Jan 13 20:41:15.879541 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:15.913667 systemd[1]: sshd@19-172.31.21.52:22-139.178.89.65:46922.service: Deactivated successfully. Jan 13 20:41:15.921128 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:41:15.922278 systemd-logind[1872]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:41:15.931298 systemd[1]: Started sshd@20-172.31.21.52:22-139.178.89.65:46928.service - OpenSSH per-connection server daemon (139.178.89.65:46928). Jan 13 20:41:15.934523 systemd-logind[1872]: Removed session 20. Jan 13 20:41:16.150448 sshd[4918]: Accepted publickey for core from 139.178.89.65 port 46928 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:41:16.153804 sshd-session[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:16.167991 systemd-logind[1872]: New session 21 of user core. Jan 13 20:41:16.181083 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 13 20:41:16.455390 sshd[4920]: Connection closed by 139.178.89.65 port 46928 Jan 13 20:41:16.461189 sshd-session[4918]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:16.473963 systemd[1]: sshd@20-172.31.21.52:22-139.178.89.65:46928.service: Deactivated successfully. Jan 13 20:41:16.484386 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:41:16.487085 systemd-logind[1872]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:41:16.490547 systemd-logind[1872]: Removed session 21. Jan 13 20:41:21.493484 systemd[1]: Started sshd@21-172.31.21.52:22-139.178.89.65:34760.service - OpenSSH per-connection server daemon (139.178.89.65:34760). Jan 13 20:41:21.686649 sshd[4931]: Accepted publickey for core from 139.178.89.65 port 34760 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:41:21.687832 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:21.703628 systemd-logind[1872]: New session 22 of user core. Jan 13 20:41:21.711705 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:41:22.009656 sshd[4933]: Connection closed by 139.178.89.65 port 34760 Jan 13 20:41:22.007912 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:22.017339 systemd[1]: sshd@21-172.31.21.52:22-139.178.89.65:34760.service: Deactivated successfully. Jan 13 20:41:22.022748 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:41:22.025287 systemd-logind[1872]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:41:22.028906 systemd-logind[1872]: Removed session 22. Jan 13 20:41:27.043121 systemd[1]: Started sshd@22-172.31.21.52:22-139.178.89.65:34772.service - OpenSSH per-connection server daemon (139.178.89.65:34772). Jan 13 20:41:27.222833 sshd[4949]: Accepted publickey for core from 139.178.89.65 port 34772 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:41:27.224450 sshd-session[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:27.244999 systemd-logind[1872]: New session 23 of user core. Jan 13 20:41:27.262950 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:41:27.581350 sshd[4951]: Connection closed by 139.178.89.65 port 34772 Jan 13 20:41:27.582165 sshd-session[4949]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:27.588843 systemd-logind[1872]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:41:27.589346 systemd[1]: sshd@22-172.31.21.52:22-139.178.89.65:34772.service: Deactivated successfully. Jan 13 20:41:27.592312 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:41:27.594316 systemd-logind[1872]: Removed session 23. Jan 13 20:41:32.622681 systemd[1]: Started sshd@23-172.31.21.52:22-139.178.89.65:50810.service - OpenSSH per-connection server daemon (139.178.89.65:50810). Jan 13 20:41:32.827751 sshd[4961]: Accepted publickey for core from 139.178.89.65 port 50810 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:41:32.829778 sshd-session[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:32.838702 systemd-logind[1872]: New session 24 of user core. Jan 13 20:41:32.845190 systemd[1]: Started session-24.scope - Session 24 of User core. 
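Sessions 9 through 24 above all follow the same four-step lifecycle: systemd accepts the TCP connection and starts a per-connection sshd@<n>-<local>:22-<peer>:<port>.service instance, sshd authenticates user core by public key (the same RSA fingerprint every time), systemd-logind wraps the login in a session-<n>.scope, and both units deactivate once the connection closes. As a hedged illustration only (nothing below is recorded host configuration), the go-systemd D-Bus client can enumerate exactly these two unit types:

    // list_ssh_units.go - illustrative sketch, not part of this host's state.
    // Uses github.com/coreos/go-systemd/v22/dbus to list the per-connection
    // sshd services and logind session scopes whose names appear in the journal.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/coreos/go-systemd/v22/dbus"
    )

    func main() {
        ctx := context.Background()
        conn, err := dbus.NewWithContext(ctx) // talks to systemd over the system bus
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Patterns match the unit names seen above:
        // sshd@<n>-<local>:22-<peer>:<port>.service and session-<n>.scope.
        units, err := conn.ListUnitsByPatternsContext(ctx, nil,
            []string{"sshd@*.service", "session-*.scope"})
        if err != nil {
            log.Fatal(err)
        }
        for _, u := range units {
            fmt.Printf("%-60s %s/%s\n", u.Name, u.ActiveState, u.SubState)
        }
    }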
Jan 13 20:41:33.093409 sshd[4963]: Connection closed by 139.178.89.65 port 50810 Jan 13 20:41:33.095529 sshd-session[4961]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:33.104169 systemd[1]: sshd@23-172.31.21.52:22-139.178.89.65:50810.service: Deactivated successfully. Jan 13 20:41:33.109890 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 20:41:33.110975 systemd-logind[1872]: Session 24 logged out. Waiting for processes to exit. Jan 13 20:41:33.112445 systemd-logind[1872]: Removed session 24. Jan 13 20:41:38.136065 systemd[1]: Started sshd@24-172.31.21.52:22-139.178.89.65:50822.service - OpenSSH per-connection server daemon (139.178.89.65:50822). Jan 13 20:41:38.327030 sshd[4976]: Accepted publickey for core from 139.178.89.65 port 50822 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:41:38.334103 sshd-session[4976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:38.343835 systemd-logind[1872]: New session 25 of user core. Jan 13 20:41:38.353469 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 20:41:38.564720 sshd[4978]: Connection closed by 139.178.89.65 port 50822 Jan 13 20:41:38.566860 sshd-session[4976]: pam_unix(sshd:session): session closed for user core Jan 13 20:41:38.574796 systemd-logind[1872]: Session 25 logged out. Waiting for processes to exit. Jan 13 20:41:38.575995 systemd[1]: sshd@24-172.31.21.52:22-139.178.89.65:50822.service: Deactivated successfully. Jan 13 20:41:38.579153 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 20:41:38.580157 systemd-logind[1872]: Removed session 25. Jan 13 20:41:38.609134 systemd[1]: Started sshd@25-172.31.21.52:22-139.178.89.65:50830.service - OpenSSH per-connection server daemon (139.178.89.65:50830). Jan 13 20:41:38.783254 sshd[4989]: Accepted publickey for core from 139.178.89.65 port 50830 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:41:38.785186 sshd-session[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:41:38.796983 systemd-logind[1872]: New session 26 of user core. Jan 13 20:41:38.814945 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 20:41:40.927938 containerd[1892]: time="2025-01-13T20:41:40.927888776Z" level=info msg="StopContainer for \"81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb\" with timeout 30 (s)" Jan 13 20:41:40.935868 containerd[1892]: time="2025-01-13T20:41:40.935583729Z" level=info msg="Stop container \"81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb\" with signal terminated" Jan 13 20:41:40.936406 systemd[1]: run-containerd-runc-k8s.io-dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9-runc.xqj37t.mount: Deactivated successfully. Jan 13 20:41:40.970789 systemd[1]: cri-containerd-81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb.scope: Deactivated successfully. 
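At this point the log switches from SSH housekeeping to a Cilium teardown. containerd receives a CRI StopContainer for 81a3fe64... with a 30-second grace period: the task gets SIGTERM, SIGKILL follows if it outlives the timeout, and systemd then deactivates the matching cri-containerd-*.scope cgroup. Below is a minimal sketch of the same call; only the container ID and the 30-second timeout come from the log, while the socket path is the stock containerd default and an assumption here:

    // stop_container.go - hedged sketch of the CRI StopContainer call that
    // produces the "StopContainer ... with timeout 30 (s)" line above.
    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 35*time.Second)
        defer cancel()

        // Assumed endpoint; the CRI socket path is not recorded in this journal.
        conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        // Timeout: 30 mirrors "with timeout 30 (s)": SIGTERM first, then
        // SIGKILL if the container still runs when the grace period lapses.
        _, err = rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
            ContainerId: "81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb",
            Timeout:     30,
        })
        if err != nil {
            log.Fatal(err)
        }
    }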
Jan 13 20:41:40.981563 containerd[1892]: time="2025-01-13T20:41:40.981492211Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:41:40.986496 containerd[1892]: time="2025-01-13T20:41:40.986454973Z" level=info msg="StopContainer for \"dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9\" with timeout 2 (s)" Jan 13 20:41:40.988871 containerd[1892]: time="2025-01-13T20:41:40.987292564Z" level=info msg="Stop container \"dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9\" with signal terminated" Jan 13 20:41:41.004015 systemd-networkd[1739]: lxc_health: Link DOWN Jan 13 20:41:41.004032 systemd-networkd[1739]: lxc_health: Lost carrier Jan 13 20:41:41.042377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb-rootfs.mount: Deactivated successfully. Jan 13 20:41:41.049137 systemd[1]: cri-containerd-dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9.scope: Deactivated successfully. Jan 13 20:41:41.051064 systemd[1]: cri-containerd-dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9.scope: Consumed 8.714s CPU time. Jan 13 20:41:41.061845 containerd[1892]: time="2025-01-13T20:41:41.061698318Z" level=info msg="shim disconnected" id=81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb namespace=k8s.io Jan 13 20:41:41.062032 containerd[1892]: time="2025-01-13T20:41:41.061863929Z" level=warning msg="cleaning up after shim disconnected" id=81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb namespace=k8s.io Jan 13 20:41:41.062032 containerd[1892]: time="2025-01-13T20:41:41.061877605Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:41:41.088754 containerd[1892]: time="2025-01-13T20:41:41.088706042Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:41:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:41:41.093141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9-rootfs.mount: Deactivated successfully. Jan 13 20:41:41.098397 containerd[1892]: time="2025-01-13T20:41:41.098348830Z" level=info msg="StopContainer for \"81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb\" returns successfully" Jan 13 20:41:41.099318 containerd[1892]: time="2025-01-13T20:41:41.099282100Z" level=info msg="StopPodSandbox for \"3c28a652cd8ab5a5116e2d972514ba4b16e705c7450d26f8f655d6c598bd5c67\"" Jan 13 20:41:41.112299 containerd[1892]: time="2025-01-13T20:41:41.101647814Z" level=info msg="Container to stop \"81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:41:41.115316 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c28a652cd8ab5a5116e2d972514ba4b16e705c7450d26f8f655d6c598bd5c67-shm.mount: Deactivated successfully. Jan 13 20:41:41.128868 systemd[1]: cri-containerd-3c28a652cd8ab5a5116e2d972514ba4b16e705c7450d26f8f655d6c598bd5c67.scope: Deactivated successfully. 
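Two side effects of the agent shutdown surface above: the lxc_health veth that Cilium uses for health probing loses carrier, and containerd's CNI watcher sees /etc/cni/net.d/05-cilium.conf removed and logs the "failed to reload cni configuration" error, which is expected while the CNI provider is being torn down rather than a fault. The watch itself is ordinary inotify on the conf directory; here is a hedged fsnotify sketch of the same mechanism (the directory path is from the log; everything else is illustrative, not containerd's actual code):

    // watch_cni.go - illustrative inotify watch on the CNI conf directory,
    // mirroring the fs change event containerd reacted to above.
    package main

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()

        if err := w.Add("/etc/cni/net.d"); err != nil {
            log.Fatal(err)
        }
        for {
            select {
            case ev := <-w.Events:
                // A REMOVE of 05-cilium.conf here is what triggered the
                // "failed to reload cni configuration" message in the log.
                log.Printf("cni conf event: %s %s", ev.Op, ev.Name)
            case err := <-w.Errors:
                log.Printf("watch error: %v", err)
            }
        }
    }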
Jan 13 20:41:41.177151 containerd[1892]: time="2025-01-13T20:41:41.177073001Z" level=info msg="shim disconnected" id=dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9 namespace=k8s.io Jan 13 20:41:41.177151 containerd[1892]: time="2025-01-13T20:41:41.177123886Z" level=warning msg="cleaning up after shim disconnected" id=dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9 namespace=k8s.io Jan 13 20:41:41.177151 containerd[1892]: time="2025-01-13T20:41:41.177136369Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:41:41.178291 containerd[1892]: time="2025-01-13T20:41:41.178075997Z" level=info msg="shim disconnected" id=3c28a652cd8ab5a5116e2d972514ba4b16e705c7450d26f8f655d6c598bd5c67 namespace=k8s.io Jan 13 20:41:41.178291 containerd[1892]: time="2025-01-13T20:41:41.178181331Z" level=warning msg="cleaning up after shim disconnected" id=3c28a652cd8ab5a5116e2d972514ba4b16e705c7450d26f8f655d6c598bd5c67 namespace=k8s.io Jan 13 20:41:41.178291 containerd[1892]: time="2025-01-13T20:41:41.178196147Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:41:41.208028 containerd[1892]: time="2025-01-13T20:41:41.207901662Z" level=info msg="StopContainer for \"dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9\" returns successfully" Jan 13 20:41:41.208667 containerd[1892]: time="2025-01-13T20:41:41.208473528Z" level=info msg="StopPodSandbox for \"60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f\"" Jan 13 20:41:41.208667 containerd[1892]: time="2025-01-13T20:41:41.208513072Z" level=info msg="Container to stop \"ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:41:41.208667 containerd[1892]: time="2025-01-13T20:41:41.208572003Z" level=info msg="Container to stop \"dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:41:41.208667 containerd[1892]: time="2025-01-13T20:41:41.208583945Z" level=info msg="Container to stop \"49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:41:41.208889 containerd[1892]: time="2025-01-13T20:41:41.208597866Z" level=info msg="Container to stop \"b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:41:41.208889 containerd[1892]: time="2025-01-13T20:41:41.208702131Z" level=info msg="Container to stop \"217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:41:41.214943 containerd[1892]: time="2025-01-13T20:41:41.214898393Z" level=info msg="TearDown network for sandbox \"3c28a652cd8ab5a5116e2d972514ba4b16e705c7450d26f8f655d6c598bd5c67\" successfully" Jan 13 20:41:41.214943 containerd[1892]: time="2025-01-13T20:41:41.214935114Z" level=info msg="StopPodSandbox for \"3c28a652cd8ab5a5116e2d972514ba4b16e705c7450d26f8f655d6c598bd5c67\" returns successfully" Jan 13 20:41:41.218623 systemd[1]: cri-containerd-60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f.scope: Deactivated successfully. 
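Each "shim disconnected" / "cleaning up after shim disconnected" pair above is containerd reaping a container's runc shim once its task exits (the single cleanup warning, runc exiting 255 on remove, is logged and tolerated). Note that sandbox 60fd22483... lists five containers that must be stopped before it goes down, matching the agent pod's four init containers plus cilium-agent itself, while sandbox 3c28a652... held only the single operator container. Continuing the hedged CRI sketch from earlier (same rt client and imports; RemovePodSandbox is the usual kubelet follow-up and is assumed here, not shown in this excerpt):

    // stopSandbox tears down a pod sandbox and then removes it, yielding the
    // "TearDown network for sandbox ..." / "StopPodSandbox ... returns
    // successfully" sequence above. Sketch only; error paths simplified.
    func stopSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
        if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
            return err
        }
        _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id})
        return err
    }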
Jan 13 20:41:41.264947 containerd[1892]: time="2025-01-13T20:41:41.264876554Z" level=info msg="shim disconnected" id=60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f namespace=k8s.io Jan 13 20:41:41.264947 containerd[1892]: time="2025-01-13T20:41:41.264943903Z" level=warning msg="cleaning up after shim disconnected" id=60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f namespace=k8s.io Jan 13 20:41:41.264947 containerd[1892]: time="2025-01-13T20:41:41.264955120Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:41:41.297222 containerd[1892]: time="2025-01-13T20:41:41.297177207Z" level=info msg="TearDown network for sandbox \"60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f\" successfully" Jan 13 20:41:41.297222 containerd[1892]: time="2025-01-13T20:41:41.297245231Z" level=info msg="StopPodSandbox for \"60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f\" returns successfully" Jan 13 20:41:41.319575 kubelet[3378]: I0113 20:41:41.319532 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca79dc98-bdfa-47a3-8064-a0e6c9a68bec-cilium-config-path\") pod \"ca79dc98-bdfa-47a3-8064-a0e6c9a68bec\" (UID: \"ca79dc98-bdfa-47a3-8064-a0e6c9a68bec\") " Jan 13 20:41:41.320285 kubelet[3378]: I0113 20:41:41.319647 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qvnv\" (UniqueName: \"kubernetes.io/projected/ca79dc98-bdfa-47a3-8064-a0e6c9a68bec-kube-api-access-9qvnv\") pod \"ca79dc98-bdfa-47a3-8064-a0e6c9a68bec\" (UID: \"ca79dc98-bdfa-47a3-8064-a0e6c9a68bec\") " Jan 13 20:41:41.338077 kubelet[3378]: I0113 20:41:41.335627 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca79dc98-bdfa-47a3-8064-a0e6c9a68bec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ca79dc98-bdfa-47a3-8064-a0e6c9a68bec" (UID: "ca79dc98-bdfa-47a3-8064-a0e6c9a68bec"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:41:41.352161 kubelet[3378]: I0113 20:41:41.352116 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca79dc98-bdfa-47a3-8064-a0e6c9a68bec-kube-api-access-9qvnv" (OuterVolumeSpecName: "kube-api-access-9qvnv") pod "ca79dc98-bdfa-47a3-8064-a0e6c9a68bec" (UID: "ca79dc98-bdfa-47a3-8064-a0e6c9a68bec"). InnerVolumeSpecName "kube-api-access-9qvnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:41:41.423442 kubelet[3378]: I0113 20:41:41.423395 3378 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca79dc98-bdfa-47a3-8064-a0e6c9a68bec-cilium-config-path\") on node \"ip-172-31-21-52\" DevicePath \"\"" Jan 13 20:41:41.423442 kubelet[3378]: I0113 20:41:41.423448 3378 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9qvnv\" (UniqueName: \"kubernetes.io/projected/ca79dc98-bdfa-47a3-8064-a0e6c9a68bec-kube-api-access-9qvnv\") on node \"ip-172-31-21-52\" DevicePath \"\"" Jan 13 20:41:41.527485 kubelet[3378]: I0113 20:41:41.526718 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-hostproc\") pod \"ac606504-fc05-4200-853c-2d28f6d3f1de\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " Jan 13 20:41:41.527485 kubelet[3378]: I0113 20:41:41.526770 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-host-proc-sys-net\") pod \"ac606504-fc05-4200-853c-2d28f6d3f1de\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " Jan 13 20:41:41.527485 kubelet[3378]: I0113 20:41:41.526793 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-cilium-cgroup\") pod \"ac606504-fc05-4200-853c-2d28f6d3f1de\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " Jan 13 20:41:41.527485 kubelet[3378]: I0113 20:41:41.526815 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-cilium-run\") pod \"ac606504-fc05-4200-853c-2d28f6d3f1de\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " Jan 13 20:41:41.527485 kubelet[3378]: I0113 20:41:41.526815 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-hostproc" (OuterVolumeSpecName: "hostproc") pod "ac606504-fc05-4200-853c-2d28f6d3f1de" (UID: "ac606504-fc05-4200-853c-2d28f6d3f1de"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:41.527485 kubelet[3378]: I0113 20:41:41.526837 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-host-proc-sys-kernel\") pod \"ac606504-fc05-4200-853c-2d28f6d3f1de\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " Jan 13 20:41:41.527974 kubelet[3378]: I0113 20:41:41.526860 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ac606504-fc05-4200-853c-2d28f6d3f1de" (UID: "ac606504-fc05-4200-853c-2d28f6d3f1de"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:41.527974 kubelet[3378]: I0113 20:41:41.526866 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ac606504-fc05-4200-853c-2d28f6d3f1de-hubble-tls\") pod \"ac606504-fc05-4200-853c-2d28f6d3f1de\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " Jan 13 20:41:41.527974 kubelet[3378]: I0113 20:41:41.526882 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ac606504-fc05-4200-853c-2d28f6d3f1de" (UID: "ac606504-fc05-4200-853c-2d28f6d3f1de"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:41.527974 kubelet[3378]: I0113 20:41:41.526895 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ac606504-fc05-4200-853c-2d28f6d3f1de-clustermesh-secrets\") pod \"ac606504-fc05-4200-853c-2d28f6d3f1de\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " Jan 13 20:41:41.527974 kubelet[3378]: I0113 20:41:41.526905 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ac606504-fc05-4200-853c-2d28f6d3f1de" (UID: "ac606504-fc05-4200-853c-2d28f6d3f1de"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:41.528203 kubelet[3378]: I0113 20:41:41.526918 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-cni-path\") pod \"ac606504-fc05-4200-853c-2d28f6d3f1de\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " Jan 13 20:41:41.528203 kubelet[3378]: I0113 20:41:41.526924 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ac606504-fc05-4200-853c-2d28f6d3f1de" (UID: "ac606504-fc05-4200-853c-2d28f6d3f1de"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:41.528203 kubelet[3378]: I0113 20:41:41.526941 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-etc-cni-netd\") pod \"ac606504-fc05-4200-853c-2d28f6d3f1de\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " Jan 13 20:41:41.528203 kubelet[3378]: I0113 20:41:41.526964 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-bpf-maps\") pod \"ac606504-fc05-4200-853c-2d28f6d3f1de\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " Jan 13 20:41:41.528203 kubelet[3378]: I0113 20:41:41.526984 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-xtables-lock\") pod \"ac606504-fc05-4200-853c-2d28f6d3f1de\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " Jan 13 20:41:41.528203 kubelet[3378]: I0113 20:41:41.527010 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac606504-fc05-4200-853c-2d28f6d3f1de-cilium-config-path\") pod \"ac606504-fc05-4200-853c-2d28f6d3f1de\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " Jan 13 20:41:41.528687 kubelet[3378]: I0113 20:41:41.527031 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-lib-modules\") pod \"ac606504-fc05-4200-853c-2d28f6d3f1de\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " Jan 13 20:41:41.528687 kubelet[3378]: I0113 20:41:41.527056 3378 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzdlb\" (UniqueName: \"kubernetes.io/projected/ac606504-fc05-4200-853c-2d28f6d3f1de-kube-api-access-jzdlb\") pod \"ac606504-fc05-4200-853c-2d28f6d3f1de\" (UID: \"ac606504-fc05-4200-853c-2d28f6d3f1de\") " Jan 13 20:41:41.528687 kubelet[3378]: I0113 20:41:41.527105 3378 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-hostproc\") on node \"ip-172-31-21-52\" DevicePath \"\"" Jan 13 20:41:41.528687 kubelet[3378]: I0113 20:41:41.527121 3378 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-host-proc-sys-net\") on node \"ip-172-31-21-52\" DevicePath \"\"" Jan 13 20:41:41.528687 kubelet[3378]: I0113 20:41:41.527133 3378 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-cilium-cgroup\") on node \"ip-172-31-21-52\" DevicePath \"\"" Jan 13 20:41:41.528687 kubelet[3378]: I0113 20:41:41.527145 3378 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-cilium-run\") on node \"ip-172-31-21-52\" DevicePath \"\"" Jan 13 20:41:41.528687 kubelet[3378]: I0113 20:41:41.527158 3378 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-host-proc-sys-kernel\") on node \"ip-172-31-21-52\" DevicePath \"\"" Jan 13 
20:41:41.536906 kubelet[3378]: I0113 20:41:41.535482 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-cni-path" (OuterVolumeSpecName: "cni-path") pod "ac606504-fc05-4200-853c-2d28f6d3f1de" (UID: "ac606504-fc05-4200-853c-2d28f6d3f1de"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:41.536906 kubelet[3378]: I0113 20:41:41.535544 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ac606504-fc05-4200-853c-2d28f6d3f1de" (UID: "ac606504-fc05-4200-853c-2d28f6d3f1de"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:41.536906 kubelet[3378]: I0113 20:41:41.535568 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ac606504-fc05-4200-853c-2d28f6d3f1de" (UID: "ac606504-fc05-4200-853c-2d28f6d3f1de"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:41.536906 kubelet[3378]: I0113 20:41:41.535587 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ac606504-fc05-4200-853c-2d28f6d3f1de" (UID: "ac606504-fc05-4200-853c-2d28f6d3f1de"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:41.537383 kubelet[3378]: I0113 20:41:41.536921 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ac606504-fc05-4200-853c-2d28f6d3f1de" (UID: "ac606504-fc05-4200-853c-2d28f6d3f1de"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:41.540128 kubelet[3378]: I0113 20:41:41.540079 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac606504-fc05-4200-853c-2d28f6d3f1de-kube-api-access-jzdlb" (OuterVolumeSpecName: "kube-api-access-jzdlb") pod "ac606504-fc05-4200-853c-2d28f6d3f1de" (UID: "ac606504-fc05-4200-853c-2d28f6d3f1de"). InnerVolumeSpecName "kube-api-access-jzdlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:41:41.540990 kubelet[3378]: I0113 20:41:41.540955 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac606504-fc05-4200-853c-2d28f6d3f1de-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ac606504-fc05-4200-853c-2d28f6d3f1de" (UID: "ac606504-fc05-4200-853c-2d28f6d3f1de"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:41:41.545152 kubelet[3378]: I0113 20:41:41.545110 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac606504-fc05-4200-853c-2d28f6d3f1de-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ac606504-fc05-4200-853c-2d28f6d3f1de" (UID: "ac606504-fc05-4200-853c-2d28f6d3f1de"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:41:41.552172 kubelet[3378]: I0113 20:41:41.552047 3378 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac606504-fc05-4200-853c-2d28f6d3f1de-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ac606504-fc05-4200-853c-2d28f6d3f1de" (UID: "ac606504-fc05-4200-853c-2d28f6d3f1de"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:41:41.627889 kubelet[3378]: I0113 20:41:41.627815 3378 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-etc-cni-netd\") on node \"ip-172-31-21-52\" DevicePath \"\"" Jan 13 20:41:41.627889 kubelet[3378]: I0113 20:41:41.627890 3378 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-bpf-maps\") on node \"ip-172-31-21-52\" DevicePath \"\"" Jan 13 20:41:41.659878 kubelet[3378]: I0113 20:41:41.659842 3378 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-xtables-lock\") on node \"ip-172-31-21-52\" DevicePath \"\"" Jan 13 20:41:41.659878 kubelet[3378]: I0113 20:41:41.659878 3378 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-lib-modules\") on node \"ip-172-31-21-52\" DevicePath \"\"" Jan 13 20:41:41.660083 kubelet[3378]: I0113 20:41:41.659891 3378 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jzdlb\" (UniqueName: \"kubernetes.io/projected/ac606504-fc05-4200-853c-2d28f6d3f1de-kube-api-access-jzdlb\") on node \"ip-172-31-21-52\" DevicePath \"\"" Jan 13 20:41:41.660083 kubelet[3378]: I0113 20:41:41.659907 3378 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac606504-fc05-4200-853c-2d28f6d3f1de-cilium-config-path\") on node \"ip-172-31-21-52\" DevicePath \"\"" Jan 13 20:41:41.660083 kubelet[3378]: I0113 20:41:41.659921 3378 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ac606504-fc05-4200-853c-2d28f6d3f1de-clustermesh-secrets\") on node \"ip-172-31-21-52\" DevicePath \"\"" Jan 13 20:41:41.660083 kubelet[3378]: I0113 20:41:41.659932 3378 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ac606504-fc05-4200-853c-2d28f6d3f1de-cni-path\") on node \"ip-172-31-21-52\" DevicePath \"\"" Jan 13 20:41:41.660083 kubelet[3378]: I0113 20:41:41.659942 3378 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ac606504-fc05-4200-853c-2d28f6d3f1de-hubble-tls\") on node \"ip-172-31-21-52\" DevicePath \"\"" Jan 13 20:41:41.925693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c28a652cd8ab5a5116e2d972514ba4b16e705c7450d26f8f655d6c598bd5c67-rootfs.mount: Deactivated successfully. Jan 13 20:41:41.925826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f-rootfs.mount: Deactivated successfully. Jan 13 20:41:41.925907 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60fd22483cc42475f05752993e1370d385d5a27dca6800b2b2a985ef29aa9f5f-shm.mount: Deactivated successfully. 
Jan 13 20:41:41.925992 systemd[1]: var-lib-kubelet-pods-ca79dc98\x2dbdfa\x2d47a3\x2d8064\x2da0e6c9a68bec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9qvnv.mount: Deactivated successfully. Jan 13 20:41:41.926077 systemd[1]: var-lib-kubelet-pods-ac606504\x2dfc05\x2d4200\x2d853c\x2d2d28f6d3f1de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djzdlb.mount: Deactivated successfully. Jan 13 20:41:41.926176 systemd[1]: var-lib-kubelet-pods-ac606504\x2dfc05\x2d4200\x2d853c\x2d2d28f6d3f1de-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:41:41.926425 systemd[1]: var-lib-kubelet-pods-ac606504\x2dfc05\x2d4200\x2d853c\x2d2d28f6d3f1de-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 20:41:42.093352 systemd[1]: Removed slice kubepods-besteffort-podca79dc98_bdfa_47a3_8064_a0e6c9a68bec.slice - libcontainer container kubepods-besteffort-podca79dc98_bdfa_47a3_8064_a0e6c9a68bec.slice. Jan 13 20:41:42.140073 kubelet[3378]: I0113 20:41:42.139746 3378 scope.go:117] "RemoveContainer" containerID="81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb" Jan 13 20:41:42.164092 containerd[1892]: time="2025-01-13T20:41:42.164036699Z" level=info msg="RemoveContainer for \"81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb\"" Jan 13 20:41:42.175888 containerd[1892]: time="2025-01-13T20:41:42.175146404Z" level=info msg="RemoveContainer for \"81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb\" returns successfully" Jan 13 20:41:42.176342 kubelet[3378]: I0113 20:41:42.176168 3378 scope.go:117] "RemoveContainer" containerID="81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb" Jan 13 20:41:42.178235 containerd[1892]: time="2025-01-13T20:41:42.177551670Z" level=error msg="ContainerStatus for \"81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb\": not found" Jan 13 20:41:42.179118 systemd[1]: Removed slice kubepods-burstable-podac606504_fc05_4200_853c_2d28f6d3f1de.slice - libcontainer container kubepods-burstable-podac606504_fc05_4200_853c_2d28f6d3f1de.slice. Jan 13 20:41:42.179997 systemd[1]: kubepods-burstable-podac606504_fc05_4200_853c_2d28f6d3f1de.slice: Consumed 8.817s CPU time. 
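From here the kubelet garbage-collects the dead containers. Each RemoveContainer succeeds, and the "ContainerStatus ... NotFound" errors around this point are the kubelet re-querying IDs it has just deleted; "DeleteContainer returned error ... not found" is therefore benign confirmation, not a failure. The slice removal above also closes the books on the agent pod's cgroup: 8.817s of CPU consumed over its lifetime, against the 8.714s attributed earlier to the cilium-agent container alone. A hedged sketch of the remove-then-verify pattern, reusing the CRI client built in the earlier sketch (additional imports assumed: fmt, google.golang.org/grpc/codes, google.golang.org/grpc/status):

    // removeAndVerify deletes a dead container and confirms it is gone - the
    // RemoveContainer -> ContainerStatus(NotFound) pattern seen below.
    func removeAndVerify(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
        if _, err := rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: id}); err != nil {
            return err
        }
        _, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
        if status.Code(err) != codes.NotFound {
            return fmt.Errorf("container %s still visible after remove: %v", id, err)
        }
        return nil // NotFound is the expected, benign outcome
    }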
Jan 13 20:41:42.186183 kubelet[3378]: E0113 20:41:42.185942 3378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb\": not found" containerID="81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb" Jan 13 20:41:42.190261 kubelet[3378]: I0113 20:41:42.189659 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb"} err="failed to get container status \"81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"81a3fe64b6daec471f2446b8f426cce00ab54e78945bb01c41798cfa013ee0eb\": not found" Jan 13 20:41:42.190261 kubelet[3378]: I0113 20:41:42.190042 3378 scope.go:117] "RemoveContainer" containerID="dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9" Jan 13 20:41:42.192182 containerd[1892]: time="2025-01-13T20:41:42.192066264Z" level=info msg="RemoveContainer for \"dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9\"" Jan 13 20:41:42.199378 containerd[1892]: time="2025-01-13T20:41:42.199217371Z" level=info msg="RemoveContainer for \"dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9\" returns successfully" Jan 13 20:41:42.199740 kubelet[3378]: I0113 20:41:42.199595 3378 scope.go:117] "RemoveContainer" containerID="217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286" Jan 13 20:41:42.201471 containerd[1892]: time="2025-01-13T20:41:42.201160249Z" level=info msg="RemoveContainer for \"217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286\"" Jan 13 20:41:42.215640 containerd[1892]: time="2025-01-13T20:41:42.214713514Z" level=info msg="RemoveContainer for \"217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286\" returns successfully" Jan 13 20:41:42.221649 kubelet[3378]: I0113 20:41:42.221400 3378 scope.go:117] "RemoveContainer" containerID="b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a" Jan 13 20:41:42.225543 containerd[1892]: time="2025-01-13T20:41:42.225506749Z" level=info msg="RemoveContainer for \"b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a\"" Jan 13 20:41:42.230882 containerd[1892]: time="2025-01-13T20:41:42.230830943Z" level=info msg="RemoveContainer for \"b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a\" returns successfully" Jan 13 20:41:42.231399 kubelet[3378]: I0113 20:41:42.231354 3378 scope.go:117] "RemoveContainer" containerID="49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef" Jan 13 20:41:42.232825 containerd[1892]: time="2025-01-13T20:41:42.232789154Z" level=info msg="RemoveContainer for \"49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef\"" Jan 13 20:41:42.241513 containerd[1892]: time="2025-01-13T20:41:42.241464348Z" level=info msg="RemoveContainer for \"49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef\" returns successfully" Jan 13 20:41:42.241850 kubelet[3378]: I0113 20:41:42.241821 3378 scope.go:117] "RemoveContainer" containerID="ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37" Jan 13 20:41:42.243769 containerd[1892]: time="2025-01-13T20:41:42.243726135Z" level=info msg="RemoveContainer for \"ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37\"" Jan 13 20:41:42.248540 containerd[1892]: 
time="2025-01-13T20:41:42.248484280Z" level=info msg="RemoveContainer for \"ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37\" returns successfully" Jan 13 20:41:42.248849 kubelet[3378]: I0113 20:41:42.248778 3378 scope.go:117] "RemoveContainer" containerID="dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9" Jan 13 20:41:42.249089 containerd[1892]: time="2025-01-13T20:41:42.249048554Z" level=error msg="ContainerStatus for \"dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9\": not found" Jan 13 20:41:42.249389 kubelet[3378]: E0113 20:41:42.249236 3378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9\": not found" containerID="dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9" Jan 13 20:41:42.249389 kubelet[3378]: I0113 20:41:42.249271 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9"} err="failed to get container status \"dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"dfb7589a70617b95604837492d4a76aa41758d4e048a2d7be5d2aff344d959a9\": not found" Jan 13 20:41:42.249389 kubelet[3378]: I0113 20:41:42.249291 3378 scope.go:117] "RemoveContainer" containerID="217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286" Jan 13 20:41:42.249843 containerd[1892]: time="2025-01-13T20:41:42.249803207Z" level=error msg="ContainerStatus for \"217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286\": not found" Jan 13 20:41:42.253455 kubelet[3378]: E0113 20:41:42.253189 3378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286\": not found" containerID="217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286" Jan 13 20:41:42.253455 kubelet[3378]: I0113 20:41:42.253238 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286"} err="failed to get container status \"217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286\": rpc error: code = NotFound desc = an error occurred when try to find container \"217f2324a1fb14a1b172aba6d086a46c0c6eadb4abda7d3d5c4e90662c995286\": not found" Jan 13 20:41:42.253455 kubelet[3378]: I0113 20:41:42.253311 3378 scope.go:117] "RemoveContainer" containerID="b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a" Jan 13 20:41:42.254243 containerd[1892]: time="2025-01-13T20:41:42.253771867Z" level=error msg="ContainerStatus for \"b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a\": not found" Jan 13 20:41:42.254790 
kubelet[3378]: E0113 20:41:42.254404 3378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a\": not found" containerID="b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a" Jan 13 20:41:42.254790 kubelet[3378]: I0113 20:41:42.254438 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a"} err="failed to get container status \"b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0178885e2bb5a1d3138ddf379da542e433ab77fb26df4853840f01d291d200a\": not found" Jan 13 20:41:42.254790 kubelet[3378]: I0113 20:41:42.254463 3378 scope.go:117] "RemoveContainer" containerID="49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef" Jan 13 20:41:42.255401 containerd[1892]: time="2025-01-13T20:41:42.255065365Z" level=error msg="ContainerStatus for \"49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef\": not found" Jan 13 20:41:42.255479 kubelet[3378]: E0113 20:41:42.255235 3378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef\": not found" containerID="49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef" Jan 13 20:41:42.255583 kubelet[3378]: I0113 20:41:42.255289 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef"} err="failed to get container status \"49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef\": rpc error: code = NotFound desc = an error occurred when try to find container \"49612f51fc2eb4f770b75670601e5fbd72edd673e8b3dbc7c608d80b6243aaef\": not found" Jan 13 20:41:42.255583 kubelet[3378]: I0113 20:41:42.255569 3378 scope.go:117] "RemoveContainer" containerID="ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37" Jan 13 20:41:42.256150 containerd[1892]: time="2025-01-13T20:41:42.256118998Z" level=error msg="ContainerStatus for \"ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37\": not found" Jan 13 20:41:42.256666 kubelet[3378]: E0113 20:41:42.256245 3378 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37\": not found" containerID="ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37" Jan 13 20:41:42.256666 kubelet[3378]: I0113 20:41:42.256291 3378 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37"} err="failed to get container status \"ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"ea4ec0b13beac90330b07926771a56fc694f5d615d396d928b3001d536e85a37\": not found"
Jan 13 20:41:42.450375 kubelet[3378]: E0113 20:41:42.450107 3378 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:41:42.679045 sshd[4991]: Connection closed by 139.178.89.65 port 50830
Jan 13 20:41:42.681007 sshd-session[4989]: pam_unix(sshd:session): session closed for user core
Jan 13 20:41:42.685468 systemd[1]: sshd@25-172.31.21.52:22-139.178.89.65:50830.service: Deactivated successfully.
Jan 13 20:41:42.688133 systemd[1]: session-26.scope: Deactivated successfully.
Jan 13 20:41:42.689565 systemd-logind[1872]: Session 26 logged out. Waiting for processes to exit.
Jan 13 20:41:42.691060 systemd-logind[1872]: Removed session 26.
Jan 13 20:41:42.719359 systemd[1]: Started sshd@26-172.31.21.52:22-139.178.89.65:36728.service - OpenSSH per-connection server daemon (139.178.89.65:36728).
Jan 13 20:41:42.912697 sshd[5150]: Accepted publickey for core from 139.178.89.65 port 36728 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY
Jan 13 20:41:42.914531 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:41:42.921591 systemd-logind[1872]: New session 27 of user core.
Jan 13 20:41:42.935443 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 13 20:41:43.129847 ntpd[1866]: Deleting interface #11 lxc_health, fe80::4045:ddff:fea9:634b%8#123, interface stats: received=0, sent=0, dropped=0, active_time=76 secs
Jan 13 20:41:43.130428 ntpd[1866]: 13 Jan 20:41:43 ntpd[1866]: Deleting interface #11 lxc_health, fe80::4045:ddff:fea9:634b%8#123, interface stats: received=0, sent=0, dropped=0, active_time=76 secs
Jan 13 20:41:43.278831 kubelet[3378]: I0113 20:41:43.278420 3378 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac606504-fc05-4200-853c-2d28f6d3f1de" path="/var/lib/kubelet/pods/ac606504-fc05-4200-853c-2d28f6d3f1de/volumes"
Jan 13 20:41:43.279792 kubelet[3378]: I0113 20:41:43.279766 3378 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca79dc98-bdfa-47a3-8064-a0e6c9a68bec" path="/var/lib/kubelet/pods/ca79dc98-bdfa-47a3-8064-a0e6c9a68bec/volumes"
Jan 13 20:41:44.003401 kubelet[3378]: I0113 20:41:44.003314 3378 topology_manager.go:215] "Topology Admit Handler" podUID="d636b9f4-9b2a-4e5f-98cd-600916292b9f" podNamespace="kube-system" podName="cilium-wjggj"
Jan 13 20:41:44.012393 kubelet[3378]: E0113 20:41:44.010840 3378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac606504-fc05-4200-853c-2d28f6d3f1de" containerName="mount-cgroup"
Jan 13 20:41:44.012756 kubelet[3378]: E0113 20:41:44.012601 3378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac606504-fc05-4200-853c-2d28f6d3f1de" containerName="mount-bpf-fs"
Jan 13 20:41:44.013436 kubelet[3378]: E0113 20:41:44.013316 3378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac606504-fc05-4200-853c-2d28f6d3f1de" containerName="cilium-agent"
Jan 13 20:41:44.013436 kubelet[3378]: E0113 20:41:44.013359 3378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac606504-fc05-4200-853c-2d28f6d3f1de" containerName="apply-sysctl-overwrites"
Jan 13 20:41:44.013436 kubelet[3378]: E0113 20:41:44.013370 3378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ac606504-fc05-4200-853c-2d28f6d3f1de" containerName="clean-cilium-state"
Jan 13 20:41:44.013436 kubelet[3378]: E0113 20:41:44.013379 3378 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ca79dc98-bdfa-47a3-8064-a0e6c9a68bec" containerName="cilium-operator"
Jan 13 20:41:44.013940 sshd[5152]: Connection closed by 139.178.89.65 port 36728
Jan 13 20:41:44.019402 kubelet[3378]: I0113 20:41:44.013687 3378 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac606504-fc05-4200-853c-2d28f6d3f1de" containerName="cilium-agent"
Jan 13 20:41:44.019402 kubelet[3378]: I0113 20:41:44.015296 3378 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca79dc98-bdfa-47a3-8064-a0e6c9a68bec" containerName="cilium-operator"
Jan 13 20:41:44.021382 sshd-session[5150]: pam_unix(sshd:session): session closed for user core
Jan 13 20:41:44.031362 systemd-logind[1872]: Session 27 logged out. Waiting for processes to exit.
Jan 13 20:41:44.032350 systemd[1]: sshd@26-172.31.21.52:22-139.178.89.65:36728.service: Deactivated successfully.
Jan 13 20:41:44.040900 systemd[1]: session-27.scope: Deactivated successfully.
Jan 13 20:41:44.066879 systemd-logind[1872]: Removed session 27.
Jan 13 20:41:44.073943 systemd[1]: Started sshd@27-172.31.21.52:22-139.178.89.65:36742.service - OpenSSH per-connection server daemon (139.178.89.65:36742).
Jan 13 20:41:44.144336 kubelet[3378]: I0113 20:41:44.143352 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d636b9f4-9b2a-4e5f-98cd-600916292b9f-xtables-lock\") pod \"cilium-wjggj\" (UID: \"d636b9f4-9b2a-4e5f-98cd-600916292b9f\") " pod="kube-system/cilium-wjggj"
Jan 13 20:41:44.144336 kubelet[3378]: I0113 20:41:44.143429 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d636b9f4-9b2a-4e5f-98cd-600916292b9f-hostproc\") pod \"cilium-wjggj\" (UID: \"d636b9f4-9b2a-4e5f-98cd-600916292b9f\") " pod="kube-system/cilium-wjggj"
Jan 13 20:41:44.144336 kubelet[3378]: I0113 20:41:44.143471 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d636b9f4-9b2a-4e5f-98cd-600916292b9f-lib-modules\") pod \"cilium-wjggj\" (UID: \"d636b9f4-9b2a-4e5f-98cd-600916292b9f\") " pod="kube-system/cilium-wjggj"
Jan 13 20:41:44.144336 kubelet[3378]: I0113 20:41:44.143505 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d636b9f4-9b2a-4e5f-98cd-600916292b9f-clustermesh-secrets\") pod \"cilium-wjggj\" (UID: \"d636b9f4-9b2a-4e5f-98cd-600916292b9f\") " pod="kube-system/cilium-wjggj"
Jan 13 20:41:44.144336 kubelet[3378]: I0113 20:41:44.143539 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d636b9f4-9b2a-4e5f-98cd-600916292b9f-cilium-ipsec-secrets\") pod \"cilium-wjggj\" (UID: \"d636b9f4-9b2a-4e5f-98cd-600916292b9f\") " pod="kube-system/cilium-wjggj"
Jan 13 20:41:44.144336 kubelet[3378]: I0113 20:41:44.143568 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d636b9f4-9b2a-4e5f-98cd-600916292b9f-host-proc-sys-kernel\") pod \"cilium-wjggj\" (UID: \"d636b9f4-9b2a-4e5f-98cd-600916292b9f\") " pod="kube-system/cilium-wjggj"
Jan 13 20:41:44.144965 kubelet[3378]: I0113 20:41:44.143621 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d636b9f4-9b2a-4e5f-98cd-600916292b9f-cilium-cgroup\") pod \"cilium-wjggj\" (UID: \"d636b9f4-9b2a-4e5f-98cd-600916292b9f\") " pod="kube-system/cilium-wjggj"
Jan 13 20:41:44.144965 kubelet[3378]: I0113 20:41:44.143650 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d636b9f4-9b2a-4e5f-98cd-600916292b9f-hubble-tls\") pod \"cilium-wjggj\" (UID: \"d636b9f4-9b2a-4e5f-98cd-600916292b9f\") " pod="kube-system/cilium-wjggj"
Jan 13 20:41:44.144965 kubelet[3378]: I0113 20:41:44.143686 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d636b9f4-9b2a-4e5f-98cd-600916292b9f-host-proc-sys-net\") pod \"cilium-wjggj\" (UID: \"d636b9f4-9b2a-4e5f-98cd-600916292b9f\") " pod="kube-system/cilium-wjggj"
Jan 13 20:41:44.144965 kubelet[3378]: I0113 20:41:44.143715 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d636b9f4-9b2a-4e5f-98cd-600916292b9f-bpf-maps\") pod \"cilium-wjggj\" (UID: \"d636b9f4-9b2a-4e5f-98cd-600916292b9f\") " pod="kube-system/cilium-wjggj"
Jan 13 20:41:44.144965 kubelet[3378]: I0113 20:41:44.143737 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d636b9f4-9b2a-4e5f-98cd-600916292b9f-etc-cni-netd\") pod \"cilium-wjggj\" (UID: \"d636b9f4-9b2a-4e5f-98cd-600916292b9f\") " pod="kube-system/cilium-wjggj"
Jan 13 20:41:44.144965 kubelet[3378]: I0113 20:41:44.143766 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt9hj\" (UniqueName: \"kubernetes.io/projected/d636b9f4-9b2a-4e5f-98cd-600916292b9f-kube-api-access-qt9hj\") pod \"cilium-wjggj\" (UID: \"d636b9f4-9b2a-4e5f-98cd-600916292b9f\") " pod="kube-system/cilium-wjggj"
Jan 13 20:41:44.145268 kubelet[3378]: I0113 20:41:44.143813 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d636b9f4-9b2a-4e5f-98cd-600916292b9f-cilium-run\") pod \"cilium-wjggj\" (UID: \"d636b9f4-9b2a-4e5f-98cd-600916292b9f\") " pod="kube-system/cilium-wjggj"
Jan 13 20:41:44.145268 kubelet[3378]: I0113 20:41:44.143842 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d636b9f4-9b2a-4e5f-98cd-600916292b9f-cni-path\") pod \"cilium-wjggj\" (UID: \"d636b9f4-9b2a-4e5f-98cd-600916292b9f\") " pod="kube-system/cilium-wjggj"
Jan 13 20:41:44.145268 kubelet[3378]: I0113 20:41:44.143871 3378 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d636b9f4-9b2a-4e5f-98cd-600916292b9f-cilium-config-path\") pod \"cilium-wjggj\" (UID: \"d636b9f4-9b2a-4e5f-98cd-600916292b9f\") " pod="kube-system/cilium-wjggj"
Jan 13 20:41:44.154949 systemd[1]: Created slice kubepods-burstable-podd636b9f4_9b2a_4e5f_98cd_600916292b9f.slice - libcontainer container kubepods-burstable-podd636b9f4_9b2a_4e5f_98cd_600916292b9f.slice.
Jan 13 20:41:44.284994 sshd[5163]: Accepted publickey for core from 139.178.89.65 port 36742 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY
Jan 13 20:41:44.286445 sshd-session[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:41:44.319885 systemd-logind[1872]: New session 28 of user core.
Jan 13 20:41:44.323861 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 13 20:41:44.442630 sshd[5170]: Connection closed by 139.178.89.65 port 36742
Jan 13 20:41:44.444371 sshd-session[5163]: pam_unix(sshd:session): session closed for user core
Jan 13 20:41:44.449482 systemd[1]: sshd@27-172.31.21.52:22-139.178.89.65:36742.service: Deactivated successfully.
Jan 13 20:41:44.452187 systemd[1]: session-28.scope: Deactivated successfully.
Jan 13 20:41:44.453959 systemd-logind[1872]: Session 28 logged out. Waiting for processes to exit.
Jan 13 20:41:44.455316 systemd-logind[1872]: Removed session 28.
Jan 13 20:41:44.463951 containerd[1892]: time="2025-01-13T20:41:44.463907743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wjggj,Uid:d636b9f4-9b2a-4e5f-98cd-600916292b9f,Namespace:kube-system,Attempt:0,}"
Jan 13 20:41:44.487877 systemd[1]: Started sshd@28-172.31.21.52:22-139.178.89.65:36758.service - OpenSSH per-connection server daemon (139.178.89.65:36758).
Jan 13 20:41:44.517334 containerd[1892]: time="2025-01-13T20:41:44.517057169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:41:44.517334 containerd[1892]: time="2025-01-13T20:41:44.517188669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:41:44.518639 containerd[1892]: time="2025-01-13T20:41:44.517344202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:41:44.518639 containerd[1892]: time="2025-01-13T20:41:44.517463533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:41:44.546816 systemd[1]: Started cri-containerd-a4447509f1c68223a95aede0cb4b9e00eccb7502d5f9ff9f94de6fcf232d918e.scope - libcontainer container a4447509f1c68223a95aede0cb4b9e00eccb7502d5f9ff9f94de6fcf232d918e.
Jan 13 20:41:44.580180 containerd[1892]: time="2025-01-13T20:41:44.579987099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wjggj,Uid:d636b9f4-9b2a-4e5f-98cd-600916292b9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4447509f1c68223a95aede0cb4b9e00eccb7502d5f9ff9f94de6fcf232d918e\""
Jan 13 20:41:44.587051 containerd[1892]: time="2025-01-13T20:41:44.587005402Z" level=info msg="CreateContainer within sandbox \"a4447509f1c68223a95aede0cb4b9e00eccb7502d5f9ff9f94de6fcf232d918e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:41:44.625129 containerd[1892]: time="2025-01-13T20:41:44.625080167Z" level=info msg="CreateContainer within sandbox \"a4447509f1c68223a95aede0cb4b9e00eccb7502d5f9ff9f94de6fcf232d918e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"21e58a13f25b86581dadeb905c05613c4f212ad3495c0fd195063d70224a4814\""
Jan 13 20:41:44.627114 containerd[1892]: time="2025-01-13T20:41:44.626770436Z" level=info msg="StartContainer for \"21e58a13f25b86581dadeb905c05613c4f212ad3495c0fd195063d70224a4814\""
Jan 13 20:41:44.662823 systemd[1]: Started cri-containerd-21e58a13f25b86581dadeb905c05613c4f212ad3495c0fd195063d70224a4814.scope - libcontainer container 21e58a13f25b86581dadeb905c05613c4f212ad3495c0fd195063d70224a4814.
Jan 13 20:41:44.678739 sshd[5176]: Accepted publickey for core from 139.178.89.65 port 36758 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY
Jan 13 20:41:44.679490 sshd-session[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:41:44.693013 systemd-logind[1872]: New session 29 of user core.
Jan 13 20:41:44.700870 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 13 20:41:44.707664 containerd[1892]: time="2025-01-13T20:41:44.707009218Z" level=info msg="StartContainer for \"21e58a13f25b86581dadeb905c05613c4f212ad3495c0fd195063d70224a4814\" returns successfully"
Jan 13 20:41:44.725715 systemd[1]: cri-containerd-21e58a13f25b86581dadeb905c05613c4f212ad3495c0fd195063d70224a4814.scope: Deactivated successfully.
Jan 13 20:41:44.783407 containerd[1892]: time="2025-01-13T20:41:44.783316288Z" level=info msg="shim disconnected" id=21e58a13f25b86581dadeb905c05613c4f212ad3495c0fd195063d70224a4814 namespace=k8s.io
Jan 13 20:41:44.783407 containerd[1892]: time="2025-01-13T20:41:44.783376348Z" level=warning msg="cleaning up after shim disconnected" id=21e58a13f25b86581dadeb905c05613c4f212ad3495c0fd195063d70224a4814 namespace=k8s.io
Jan 13 20:41:44.783407 containerd[1892]: time="2025-01-13T20:41:44.783387917Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:41:44.803760 containerd[1892]: time="2025-01-13T20:41:44.803412043Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:41:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:41:45.195421 containerd[1892]: time="2025-01-13T20:41:45.195083641Z" level=info msg="CreateContainer within sandbox \"a4447509f1c68223a95aede0cb4b9e00eccb7502d5f9ff9f94de6fcf232d918e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:41:45.228674 containerd[1892]: time="2025-01-13T20:41:45.228573133Z" level=info msg="CreateContainer within sandbox \"a4447509f1c68223a95aede0cb4b9e00eccb7502d5f9ff9f94de6fcf232d918e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"706f8ac6812ea71a42819cbad35b22417140344099237bac5c42610c6088b3d6\""
Jan 13 20:41:45.231320 containerd[1892]: time="2025-01-13T20:41:45.230342579Z" level=info msg="StartContainer for \"706f8ac6812ea71a42819cbad35b22417140344099237bac5c42610c6088b3d6\""
Jan 13 20:41:45.341902 systemd[1]: Started cri-containerd-706f8ac6812ea71a42819cbad35b22417140344099237bac5c42610c6088b3d6.scope - libcontainer container 706f8ac6812ea71a42819cbad35b22417140344099237bac5c42610c6088b3d6.
Jan 13 20:41:45.377591 containerd[1892]: time="2025-01-13T20:41:45.377535091Z" level=info msg="StartContainer for \"706f8ac6812ea71a42819cbad35b22417140344099237bac5c42610c6088b3d6\" returns successfully"
Jan 13 20:41:45.390315 systemd[1]: cri-containerd-706f8ac6812ea71a42819cbad35b22417140344099237bac5c42610c6088b3d6.scope: Deactivated successfully.
Jan 13 20:41:45.416318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-706f8ac6812ea71a42819cbad35b22417140344099237bac5c42610c6088b3d6-rootfs.mount: Deactivated successfully.
Jan 13 20:41:45.428480 containerd[1892]: time="2025-01-13T20:41:45.428409535Z" level=info msg="shim disconnected" id=706f8ac6812ea71a42819cbad35b22417140344099237bac5c42610c6088b3d6 namespace=k8s.io
Jan 13 20:41:45.428480 containerd[1892]: time="2025-01-13T20:41:45.428472524Z" level=warning msg="cleaning up after shim disconnected" id=706f8ac6812ea71a42819cbad35b22417140344099237bac5c42610c6088b3d6 namespace=k8s.io
Jan 13 20:41:45.428480 containerd[1892]: time="2025-01-13T20:41:45.428484286Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:41:46.183538 containerd[1892]: time="2025-01-13T20:41:46.183329438Z" level=info msg="CreateContainer within sandbox \"a4447509f1c68223a95aede0cb4b9e00eccb7502d5f9ff9f94de6fcf232d918e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:41:46.210133 containerd[1892]: time="2025-01-13T20:41:46.210086391Z" level=info msg="CreateContainer within sandbox \"a4447509f1c68223a95aede0cb4b9e00eccb7502d5f9ff9f94de6fcf232d918e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bfdb21a8f006cbb4cc8ee887854bb0c8007f6d09fb8d9ed68dc1a0164485d334\""
Jan 13 20:41:46.212543 containerd[1892]: time="2025-01-13T20:41:46.210590270Z" level=info msg="StartContainer for \"bfdb21a8f006cbb4cc8ee887854bb0c8007f6d09fb8d9ed68dc1a0164485d334\""
Jan 13 20:41:46.257851 systemd[1]: Started cri-containerd-bfdb21a8f006cbb4cc8ee887854bb0c8007f6d09fb8d9ed68dc1a0164485d334.scope - libcontainer container bfdb21a8f006cbb4cc8ee887854bb0c8007f6d09fb8d9ed68dc1a0164485d334.
Jan 13 20:41:46.274653 kubelet[3378]: E0113 20:41:46.272939 3378 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-v65x8" podUID="23673e44-4f7a-405e-8f39-f218b881fae1"
Jan 13 20:41:46.316737 containerd[1892]: time="2025-01-13T20:41:46.316596762Z" level=info msg="StartContainer for \"bfdb21a8f006cbb4cc8ee887854bb0c8007f6d09fb8d9ed68dc1a0164485d334\" returns successfully"
Jan 13 20:41:46.326797 systemd[1]: cri-containerd-bfdb21a8f006cbb4cc8ee887854bb0c8007f6d09fb8d9ed68dc1a0164485d334.scope: Deactivated successfully.
Jan 13 20:41:46.355553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfdb21a8f006cbb4cc8ee887854bb0c8007f6d09fb8d9ed68dc1a0164485d334-rootfs.mount: Deactivated successfully.
Jan 13 20:41:46.367082 containerd[1892]: time="2025-01-13T20:41:46.367021328Z" level=info msg="shim disconnected" id=bfdb21a8f006cbb4cc8ee887854bb0c8007f6d09fb8d9ed68dc1a0164485d334 namespace=k8s.io
Jan 13 20:41:46.367082 containerd[1892]: time="2025-01-13T20:41:46.367080725Z" level=warning msg="cleaning up after shim disconnected" id=bfdb21a8f006cbb4cc8ee887854bb0c8007f6d09fb8d9ed68dc1a0164485d334 namespace=k8s.io
Jan 13 20:41:46.367550 containerd[1892]: time="2025-01-13T20:41:46.367091303Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:41:46.381371 containerd[1892]: time="2025-01-13T20:41:46.381292248Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:41:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:41:47.191900 containerd[1892]: time="2025-01-13T20:41:47.191779534Z" level=info msg="CreateContainer within sandbox \"a4447509f1c68223a95aede0cb4b9e00eccb7502d5f9ff9f94de6fcf232d918e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:41:47.233747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1309418860.mount: Deactivated successfully.
Jan 13 20:41:47.235859 containerd[1892]: time="2025-01-13T20:41:47.235761953Z" level=info msg="CreateContainer within sandbox \"a4447509f1c68223a95aede0cb4b9e00eccb7502d5f9ff9f94de6fcf232d918e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"164d1d24f3a421cb1f20e6052b88f69c07544ddb0c387ebe4b26a39c938109cc\""
Jan 13 20:41:47.238655 containerd[1892]: time="2025-01-13T20:41:47.237457543Z" level=info msg="StartContainer for \"164d1d24f3a421cb1f20e6052b88f69c07544ddb0c387ebe4b26a39c938109cc\""
Jan 13 20:41:47.303841 systemd[1]: Started cri-containerd-164d1d24f3a421cb1f20e6052b88f69c07544ddb0c387ebe4b26a39c938109cc.scope - libcontainer container 164d1d24f3a421cb1f20e6052b88f69c07544ddb0c387ebe4b26a39c938109cc.
Jan 13 20:41:47.346249 systemd[1]: cri-containerd-164d1d24f3a421cb1f20e6052b88f69c07544ddb0c387ebe4b26a39c938109cc.scope: Deactivated successfully.
Jan 13 20:41:47.348215 containerd[1892]: time="2025-01-13T20:41:47.348155923Z" level=info msg="StartContainer for \"164d1d24f3a421cb1f20e6052b88f69c07544ddb0c387ebe4b26a39c938109cc\" returns successfully"
Jan 13 20:41:47.376720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-164d1d24f3a421cb1f20e6052b88f69c07544ddb0c387ebe4b26a39c938109cc-rootfs.mount: Deactivated successfully.
Jan 13 20:41:47.387884 containerd[1892]: time="2025-01-13T20:41:47.387807418Z" level=info msg="shim disconnected" id=164d1d24f3a421cb1f20e6052b88f69c07544ddb0c387ebe4b26a39c938109cc namespace=k8s.io
Jan 13 20:41:47.387884 containerd[1892]: time="2025-01-13T20:41:47.387871013Z" level=warning msg="cleaning up after shim disconnected" id=164d1d24f3a421cb1f20e6052b88f69c07544ddb0c387ebe4b26a39c938109cc namespace=k8s.io
Jan 13 20:41:47.387884 containerd[1892]: time="2025-01-13T20:41:47.387883279Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:41:47.450985 kubelet[3378]: E0113 20:41:47.450838 3378 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:41:48.198648 containerd[1892]: time="2025-01-13T20:41:48.194761408Z" level=info msg="CreateContainer within sandbox \"a4447509f1c68223a95aede0cb4b9e00eccb7502d5f9ff9f94de6fcf232d918e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:41:48.258404 containerd[1892]: time="2025-01-13T20:41:48.258354517Z" level=info msg="CreateContainer within sandbox \"a4447509f1c68223a95aede0cb4b9e00eccb7502d5f9ff9f94de6fcf232d918e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ab2b10c82f9bac1b44dee2a8fec6a8ae3fa6af415ca1b943331c5c47a38f4dee\""
Jan 13 20:41:48.260201 containerd[1892]: time="2025-01-13T20:41:48.259082331Z" level=info msg="StartContainer for \"ab2b10c82f9bac1b44dee2a8fec6a8ae3fa6af415ca1b943331c5c47a38f4dee\""
Jan 13 20:41:48.275280 kubelet[3378]: E0113 20:41:48.272993 3378 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-v65x8" podUID="23673e44-4f7a-405e-8f39-f218b881fae1"
Jan 13 20:41:48.307957 systemd[1]: Started cri-containerd-ab2b10c82f9bac1b44dee2a8fec6a8ae3fa6af415ca1b943331c5c47a38f4dee.scope - libcontainer container ab2b10c82f9bac1b44dee2a8fec6a8ae3fa6af415ca1b943331c5c47a38f4dee.
Jan 13 20:41:48.363296 containerd[1892]: time="2025-01-13T20:41:48.363154719Z" level=info msg="StartContainer for \"ab2b10c82f9bac1b44dee2a8fec6a8ae3fa6af415ca1b943331c5c47a38f4dee\" returns successfully"
Jan 13 20:41:49.276989 kubelet[3378]: I0113 20:41:49.276916 3378 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wjggj" podStartSLOduration=6.276891458 podStartE2EDuration="6.276891458s" podCreationTimestamp="2025-01-13 20:41:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:41:49.26990565 +0000 UTC m=+132.214830116" watchObservedRunningTime="2025-01-13 20:41:49.276891458 +0000 UTC m=+132.221815921"
Jan 13 20:41:49.546636 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 13 20:41:50.272442 kubelet[3378]: E0113 20:41:50.272378 3378 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-v65x8" podUID="23673e44-4f7a-405e-8f39-f218b881fae1"
Jan 13 20:41:50.360677 kubelet[3378]: I0113 20:41:50.360367 3378 setters.go:580] "Node became not ready" node="ip-172-31-21-52" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:41:50Z","lastTransitionTime":"2025-01-13T20:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:41:51.479558 systemd[1]: run-containerd-runc-k8s.io-ab2b10c82f9bac1b44dee2a8fec6a8ae3fa6af415ca1b943331c5c47a38f4dee-runc.497qP1.mount: Deactivated successfully.
Jan 13 20:41:52.272862 kubelet[3378]: E0113 20:41:52.272816 3378 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-v65x8" podUID="23673e44-4f7a-405e-8f39-f218b881fae1"
Jan 13 20:41:53.275257 systemd-networkd[1739]: lxc_health: Link UP
Jan 13 20:41:53.286793 (udev-worker)[6027]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:41:53.295466 systemd-networkd[1739]: lxc_health: Gained carrier
Jan 13 20:41:53.924845 systemd[1]: run-containerd-runc-k8s.io-ab2b10c82f9bac1b44dee2a8fec6a8ae3fa6af415ca1b943331c5c47a38f4dee-runc.UJOIGm.mount: Deactivated successfully.
Jan 13 20:41:54.720819 systemd-networkd[1739]: lxc_health: Gained IPv6LL
Jan 13 20:41:57.127871 ntpd[1866]: Listen normally on 14 lxc_health [fe80::14cd:ddff:fe36:aa62%14]:123
Jan 13 20:41:57.128406 ntpd[1866]: 13 Jan 20:41:57 ntpd[1866]: Listen normally on 14 lxc_health [fe80::14cd:ddff:fe36:aa62%14]:123
Jan 13 20:42:00.971830 sshd[5250]: Connection closed by 139.178.89.65 port 36758
Jan 13 20:42:00.975458 sshd-session[5176]: pam_unix(sshd:session): session closed for user core
Jan 13 20:42:00.984002 systemd[1]: sshd@28-172.31.21.52:22-139.178.89.65:36758.service: Deactivated successfully.
Jan 13 20:42:00.990211 systemd[1]: session-29.scope: Deactivated successfully.
Jan 13 20:42:00.991858 systemd-logind[1872]: Session 29 logged out. Waiting for processes to exit.
Jan 13 20:42:00.995204 systemd-logind[1872]: Removed session 29.
Jan 13 20:42:25.654420 systemd[1]: cri-containerd-ee86b973aeb6d9df22ff82f13ce479c948f66714e76dac2a58f856602e6b62cd.scope: Deactivated successfully.
Jan 13 20:42:25.656751 systemd[1]: cri-containerd-ee86b973aeb6d9df22ff82f13ce479c948f66714e76dac2a58f856602e6b62cd.scope: Consumed 3.366s CPU time, 24.2M memory peak, 0B memory swap peak.
Jan 13 20:42:25.689224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee86b973aeb6d9df22ff82f13ce479c948f66714e76dac2a58f856602e6b62cd-rootfs.mount: Deactivated successfully.
Jan 13 20:42:25.697589 containerd[1892]: time="2025-01-13T20:42:25.697418456Z" level=info msg="shim disconnected" id=ee86b973aeb6d9df22ff82f13ce479c948f66714e76dac2a58f856602e6b62cd namespace=k8s.io
Jan 13 20:42:25.697589 containerd[1892]: time="2025-01-13T20:42:25.697569464Z" level=warning msg="cleaning up after shim disconnected" id=ee86b973aeb6d9df22ff82f13ce479c948f66714e76dac2a58f856602e6b62cd namespace=k8s.io
Jan 13 20:42:25.697589 containerd[1892]: time="2025-01-13T20:42:25.697583301Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:42:26.307422 kubelet[3378]: I0113 20:42:26.307369 3378 scope.go:117] "RemoveContainer" containerID="ee86b973aeb6d9df22ff82f13ce479c948f66714e76dac2a58f856602e6b62cd"
Jan 13 20:42:26.311526 containerd[1892]: time="2025-01-13T20:42:26.311484977Z" level=info msg="CreateContainer within sandbox \"50d03434c4a5cd095476c93b05b25475e21a70e46b5ed5175948b12f664f4b89\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 13 20:42:26.348250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2983323988.mount: Deactivated successfully.
Jan 13 20:42:26.360732 containerd[1892]: time="2025-01-13T20:42:26.360681509Z" level=info msg="CreateContainer within sandbox \"50d03434c4a5cd095476c93b05b25475e21a70e46b5ed5175948b12f664f4b89\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"04e9385b63dd8bb7ff780147ec2a667b36cb48b90579a8d52ebbb77ce1cdd0b2\""
Jan 13 20:42:26.361427 containerd[1892]: time="2025-01-13T20:42:26.361385509Z" level=info msg="StartContainer for \"04e9385b63dd8bb7ff780147ec2a667b36cb48b90579a8d52ebbb77ce1cdd0b2\""
Jan 13 20:42:26.414866 systemd[1]: Started cri-containerd-04e9385b63dd8bb7ff780147ec2a667b36cb48b90579a8d52ebbb77ce1cdd0b2.scope - libcontainer container 04e9385b63dd8bb7ff780147ec2a667b36cb48b90579a8d52ebbb77ce1cdd0b2.
Jan 13 20:42:26.474068 containerd[1892]: time="2025-01-13T20:42:26.474010842Z" level=info msg="StartContainer for \"04e9385b63dd8bb7ff780147ec2a667b36cb48b90579a8d52ebbb77ce1cdd0b2\" returns successfully"
Jan 13 20:42:31.281862 systemd[1]: cri-containerd-0ff1a239fbc48f7d20239da3bcafb52ba8163c03a8ff889c7d45f437130a4040.scope: Deactivated successfully.
Jan 13 20:42:31.282152 systemd[1]: cri-containerd-0ff1a239fbc48f7d20239da3bcafb52ba8163c03a8ff889c7d45f437130a4040.scope: Consumed 1.428s CPU time, 17.2M memory peak, 0B memory swap peak.
Jan 13 20:42:31.327253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ff1a239fbc48f7d20239da3bcafb52ba8163c03a8ff889c7d45f437130a4040-rootfs.mount: Deactivated successfully.
Jan 13 20:42:31.406033 containerd[1892]: time="2025-01-13T20:42:31.405966064Z" level=info msg="shim disconnected" id=0ff1a239fbc48f7d20239da3bcafb52ba8163c03a8ff889c7d45f437130a4040 namespace=k8s.io
Jan 13 20:42:31.406033 containerd[1892]: time="2025-01-13T20:42:31.406026454Z" level=warning msg="cleaning up after shim disconnected" id=0ff1a239fbc48f7d20239da3bcafb52ba8163c03a8ff889c7d45f437130a4040 namespace=k8s.io
Jan 13 20:42:31.406033 containerd[1892]: time="2025-01-13T20:42:31.406038055Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:42:31.425338 containerd[1892]: time="2025-01-13T20:42:31.425285551Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:42:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:42:31.643138 kubelet[3378]: E0113 20:42:31.643051 3378 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-52?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 13 20:42:32.355033 kubelet[3378]: I0113 20:42:32.354998 3378 scope.go:117] "RemoveContainer" containerID="0ff1a239fbc48f7d20239da3bcafb52ba8163c03a8ff889c7d45f437130a4040"
Jan 13 20:42:32.361313 containerd[1892]: time="2025-01-13T20:42:32.361274704Z" level=info msg="CreateContainer within sandbox \"e5770a3867802437459b61f61c39b68ea33083cbc90bd9115a91be78d91f3900\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 13 20:42:32.408060 containerd[1892]: time="2025-01-13T20:42:32.408003649Z" level=info msg="CreateContainer within sandbox \"e5770a3867802437459b61f61c39b68ea33083cbc90bd9115a91be78d91f3900\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"0567c98d8d532fb46a83c314044b653e3a5066df5a5a4f331936382d50b8cb11\""
Jan 13 20:42:32.409243 containerd[1892]: time="2025-01-13T20:42:32.409205245Z" level=info msg="StartContainer for \"0567c98d8d532fb46a83c314044b653e3a5066df5a5a4f331936382d50b8cb11\""
Jan 13 20:42:32.510446 systemd[1]: run-containerd-runc-k8s.io-0567c98d8d532fb46a83c314044b653e3a5066df5a5a4f331936382d50b8cb11-runc.Dbe0RC.mount: Deactivated successfully.
Jan 13 20:42:32.521893 systemd[1]: Started cri-containerd-0567c98d8d532fb46a83c314044b653e3a5066df5a5a4f331936382d50b8cb11.scope - libcontainer container 0567c98d8d532fb46a83c314044b653e3a5066df5a5a4f331936382d50b8cb11.
Jan 13 20:42:32.598780 containerd[1892]: time="2025-01-13T20:42:32.598706722Z" level=info msg="StartContainer for \"0567c98d8d532fb46a83c314044b653e3a5066df5a5a4f331936382d50b8cb11\" returns successfully"