Jan 13 20:45:15.030288 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 13 20:45:15.030336 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:45:15.030346 kernel: BIOS-provided physical RAM map:
Jan 13 20:45:15.030353 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 20:45:15.030359 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 20:45:15.030366 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 20:45:15.030376 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 13 20:45:15.030383 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 13 20:45:15.030390 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 13 20:45:15.030397 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 20:45:15.030403 kernel: NX (Execute Disable) protection: active
Jan 13 20:45:15.030410 kernel: APIC: Static calls initialized
Jan 13 20:45:15.030417 kernel: SMBIOS 2.7 present.
Jan 13 20:45:15.030424 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 13 20:45:15.030434 kernel: Hypervisor detected: KVM
Jan 13 20:45:15.030442 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:45:15.030450 kernel: kvm-clock: using sched offset of 9489773964 cycles
Jan 13 20:45:15.030458 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:45:15.030466 kernel: tsc: Detected 2499.998 MHz processor
Jan 13 20:45:15.030474 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:45:15.030482 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:45:15.030492 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 13 20:45:15.030500 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 20:45:15.030508 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:45:15.030515 kernel: Using GB pages for direct mapping
Jan 13 20:45:15.030523 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:45:15.030531 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 13 20:45:15.030539 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 13 20:45:15.030546 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 20:45:15.030554 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 13 20:45:15.030564 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 13 20:45:15.030572 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 20:45:15.030580 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 20:45:15.030587 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 13 20:45:15.030595 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 20:45:15.030603 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 13 20:45:15.030610 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 13 20:45:15.030618 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 20:45:15.030626 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 13 20:45:15.030636 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 13 20:45:15.030647 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 13 20:45:15.030655 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 13 20:45:15.030663 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 13 20:45:15.030672 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 13 20:45:15.030683 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 13 20:45:15.030691 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 13 20:45:15.030699 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 13 20:45:15.030707 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 13 20:45:15.030715 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 20:45:15.030724 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 20:45:15.030732 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 13 20:45:15.030740 kernel: NUMA: Initialized distance table, cnt=1
Jan 13 20:45:15.030748 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 13 20:45:15.030759 kernel: Zone ranges:
Jan 13 20:45:15.030767 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:45:15.030775 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 13 20:45:15.030783 kernel: Normal empty
Jan 13 20:45:15.030791 kernel: Movable zone start for each node
Jan 13 20:45:15.030800 kernel: Early memory node ranges
Jan 13 20:45:15.030808 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 20:45:15.030816 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 13 20:45:15.030824 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 13 20:45:15.030832 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:45:15.030843 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 20:45:15.030851 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 13 20:45:15.030859 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 13 20:45:15.030867 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:45:15.030875 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 13 20:45:15.030884 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:45:15.030892 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:45:15.030900 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:45:15.030908 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:45:15.030919 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:45:15.030927 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 20:45:15.030935 kernel: TSC deadline timer available
Jan 13 20:45:15.030943 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 20:45:15.030951 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:45:15.030959 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 13 20:45:15.030967 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:45:15.030975 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:45:15.030984 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 20:45:15.030994 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 20:45:15.031003 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 20:45:15.031011 kernel: pcpu-alloc: [0] 0 1
Jan 13 20:45:15.031019 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:45:15.031027 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:45:15.031036 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:45:15.031045 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:45:15.031053 kernel: random: crng init done
Jan 13 20:45:15.031063 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:45:15.031072 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 20:45:15.031080 kernel: Fallback order for Node 0: 0
Jan 13 20:45:15.031088 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jan 13 20:45:15.031096 kernel: Policy zone: DMA32
Jan 13 20:45:15.031104 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:45:15.031113 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 125152K reserved, 0K cma-reserved)
Jan 13 20:45:15.031121 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:45:15.031129 kernel: Kernel/User page tables isolation: enabled
Jan 13 20:45:15.031140 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 13 20:45:15.031148 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:45:15.031156 kernel: Dynamic Preempt: voluntary
Jan 13 20:45:15.031164 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:45:15.031174 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:45:15.031182 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:45:15.031190 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:45:15.031199 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:45:15.031207 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:45:15.031218 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:45:15.031226 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:45:15.031234 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 20:45:15.031242 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:45:15.031251 kernel: Console: colour VGA+ 80x25
Jan 13 20:45:15.031259 kernel: printk: console [ttyS0] enabled
Jan 13 20:45:15.031267 kernel: ACPI: Core revision 20230628
Jan 13 20:45:15.031275 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 13 20:45:15.031283 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:45:15.031294 kernel: x2apic enabled
Jan 13 20:45:15.031302 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:45:15.031327 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 13 20:45:15.031338 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 13 20:45:15.031347 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 13 20:45:15.031356 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 13 20:45:15.031364 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:45:15.031373 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:45:15.031381 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:45:15.031390 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:45:15.031399 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 13 20:45:15.031407 kernel: RETBleed: Vulnerable
Jan 13 20:45:15.031418 kernel: Speculative Store Bypass: Vulnerable
Jan 13 20:45:15.031427 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 20:45:15.031436 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 20:45:15.031444 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 13 20:45:15.031452 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:45:15.031461 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:45:15.031470 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:45:15.031481 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 13 20:45:15.031489 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 13 20:45:15.031498 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 13 20:45:15.031507 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 13 20:45:15.031515 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 13 20:45:15.031524 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 13 20:45:15.031532 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:45:15.031541 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 13 20:45:15.031549 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 13 20:45:15.031558 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 13 20:45:15.031566 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 13 20:45:15.031577 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 13 20:45:15.031586 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 13 20:45:15.031594 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 13 20:45:15.031603 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:45:15.031612 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:45:15.031620 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:45:15.031629 kernel: landlock: Up and running.
Jan 13 20:45:15.031637 kernel: SELinux: Initializing.
Jan 13 20:45:15.031646 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 20:45:15.031654 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 20:45:15.031663 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4)
Jan 13 20:45:15.031674 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:45:15.031683 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:45:15.031692 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:45:15.031701 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 13 20:45:15.031710 kernel: signal: max sigframe size: 3632
Jan 13 20:45:15.031719 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:45:15.031727 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:45:15.031736 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 20:45:15.031745 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:45:15.031756 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:45:15.031765 kernel: .... node #0, CPUs: #1
Jan 13 20:45:15.031774 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 13 20:45:15.031783 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 20:45:15.031792 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:45:15.031800 kernel: smpboot: Max logical packages: 1
Jan 13 20:45:15.031809 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 13 20:45:15.031818 kernel: devtmpfs: initialized
Jan 13 20:45:15.031829 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:45:15.031838 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:45:15.031847 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:45:15.031856 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:45:15.031865 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:45:15.031873 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:45:15.031882 kernel: audit: type=2000 audit(1736801114.290:1): state=initialized audit_enabled=0 res=1
Jan 13 20:45:15.031891 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:45:15.031900 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:45:15.031911 kernel: cpuidle: using governor menu
Jan 13 20:45:15.031919 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:45:15.031928 kernel: dca service started, version 1.12.1
Jan 13 20:45:15.031937 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:45:15.031945 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:45:15.031954 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:45:15.031963 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:45:15.031971 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:45:15.031980 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:45:15.031991 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:45:15.032000 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:45:15.032008 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:45:15.032017 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:45:15.032026 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 13 20:45:15.032034 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:45:15.032043 kernel: ACPI: Interpreter enabled
Jan 13 20:45:15.032052 kernel: ACPI: PM: (supports S0 S5)
Jan 13 20:45:15.032060 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:45:15.032069 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:45:15.032080 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:45:15.032089 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 13 20:45:15.032109 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:45:15.032284 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:45:15.032397 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 20:45:15.032490 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 20:45:15.032501 kernel: acpiphp: Slot [3] registered
Jan 13 20:45:15.032513 kernel: acpiphp: Slot [4] registered
Jan 13 20:45:15.032522 kernel: acpiphp: Slot [5] registered
Jan 13 20:45:15.032531 kernel: acpiphp: Slot [6] registered
Jan 13 20:45:15.032539 kernel: acpiphp: Slot [7] registered
Jan 13 20:45:15.032548 kernel: acpiphp: Slot [8] registered
Jan 13 20:45:15.032557 kernel: acpiphp: Slot [9] registered
Jan 13 20:45:15.032566 kernel: acpiphp: Slot [10] registered
Jan 13 20:45:15.032574 kernel: acpiphp: Slot [11] registered
Jan 13 20:45:15.032583 kernel: acpiphp: Slot [12] registered
Jan 13 20:45:15.032596 kernel: acpiphp: Slot [13] registered
Jan 13 20:45:15.032605 kernel: acpiphp: Slot [14] registered
Jan 13 20:45:15.032614 kernel: acpiphp: Slot [15] registered
Jan 13 20:45:15.032623 kernel: acpiphp: Slot [16] registered
Jan 13 20:45:15.032631 kernel: acpiphp: Slot [17] registered
Jan 13 20:45:15.032640 kernel: acpiphp: Slot [18] registered
Jan 13 20:45:15.032649 kernel: acpiphp: Slot [19] registered
Jan 13 20:45:15.032657 kernel: acpiphp: Slot [20] registered
Jan 13 20:45:15.032666 kernel: acpiphp: Slot [21] registered
Jan 13 20:45:15.032675 kernel: acpiphp: Slot [22] registered
Jan 13 20:45:15.032686 kernel: acpiphp: Slot [23] registered
Jan 13 20:45:15.032694 kernel: acpiphp: Slot [24] registered
Jan 13 20:45:15.032703 kernel: acpiphp: Slot [25] registered
Jan 13 20:45:15.032712 kernel: acpiphp: Slot [26] registered
Jan 13 20:45:15.032720 kernel: acpiphp: Slot [27] registered
Jan 13 20:45:15.032729 kernel: acpiphp: Slot [28] registered
Jan 13 20:45:15.032737 kernel: acpiphp: Slot [29] registered
Jan 13 20:45:15.032746 kernel: acpiphp: Slot [30] registered
Jan 13 20:45:15.032755 kernel: acpiphp: Slot [31] registered
Jan 13 20:45:15.032766 kernel: PCI host bridge to bus 0000:00
Jan 13 20:45:15.032862 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:45:15.032944 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:45:15.033242 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:45:15.033360 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 13 20:45:15.033440 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:45:15.033544 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 20:45:15.033647 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 20:45:15.033743 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 13 20:45:15.033833 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 13 20:45:15.033923 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jan 13 20:45:15.034012 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 13 20:45:15.034102 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 13 20:45:15.034191 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 13 20:45:15.034284 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 13 20:45:15.034389 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 13 20:45:15.034479 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 13 20:45:15.034569 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 12695 usecs
Jan 13 20:45:15.034663 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 13 20:45:15.034752 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 13 20:45:15.034853 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 13 20:45:15.034942 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:45:15.035036 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 20:45:15.035126 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 13 20:45:15.035226 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 20:45:15.035316 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 13 20:45:15.035339 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:45:15.035351 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:45:15.035361 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:45:15.035369 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:45:15.035379 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 20:45:15.035388 kernel: iommu: Default domain type: Translated
Jan 13 20:45:15.035397 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:45:15.035406 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:45:15.035415 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:45:15.035424 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 20:45:15.035435 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 13 20:45:15.035832 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 13 20:45:15.035943 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 13 20:45:15.036036 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:45:15.036048 kernel: vgaarb: loaded
Jan 13 20:45:15.036058 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 13 20:45:15.036067 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 13 20:45:15.036076 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:45:15.036085 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:45:15.036110 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:45:15.036119 kernel: pnp: PnP ACPI init
Jan 13 20:45:15.036129 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 20:45:15.036137 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:45:15.036147 kernel: NET: Registered PF_INET protocol family
Jan 13 20:45:15.036156 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:45:15.036165 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 13 20:45:15.036173 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:45:15.036185 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 20:45:15.036194 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 20:45:15.036203 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 13 20:45:15.036212 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 20:45:15.036221 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 20:45:15.036230 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:45:15.036239 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:45:15.036426 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:45:15.036539 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:45:15.036703 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:45:15.036790 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 13 20:45:15.036885 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 20:45:15.036897 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:45:15.036907 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 20:45:15.036916 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 13 20:45:15.036925 kernel: clocksource: Switched to clocksource tsc
Jan 13 20:45:15.036934 kernel: Initialise system trusted keyrings
Jan 13 20:45:15.036947 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 13 20:45:15.036956 kernel: Key type asymmetric registered
Jan 13 20:45:15.036966 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:45:15.036974 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:45:15.036983 kernel: io scheduler mq-deadline registered
Jan 13 20:45:15.036992 kernel: io scheduler kyber registered
Jan 13 20:45:15.037001 kernel: io scheduler bfq registered
Jan 13 20:45:15.037010 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:45:15.037019 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:45:15.037030 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:45:15.037040 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:45:15.037048 kernel: i8042: Warning: Keylock active
Jan 13 20:45:15.037057 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:45:15.037066 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:45:15.037166 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 13 20:45:15.037353 kernel: rtc_cmos 00:00: registered as rtc0
Jan 13 20:45:15.037445 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T20:45:14 UTC (1736801114)
Jan 13 20:45:15.037535 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 13 20:45:15.037547 kernel: intel_pstate: CPU model not supported
Jan 13 20:45:15.037556 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:45:15.037565 kernel: Segment Routing with IPv6
Jan 13 20:45:15.037573 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:45:15.037582 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:45:15.037591 kernel: Key type dns_resolver registered
Jan 13 20:45:15.037600 kernel: IPI shorthand broadcast: enabled
Jan 13 20:45:15.037609 kernel: sched_clock: Marking stable (690066998, 290486937)->(1090635040, -110081105)
Jan 13 20:45:15.037620 kernel: registered taskstats version 1
Jan 13 20:45:15.037629 kernel: Loading compiled-in X.509 certificates
Jan 13 20:45:15.037638 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 13 20:45:15.037648 kernel: Key type .fscrypt registered
Jan 13 20:45:15.037656 kernel: Key type fscrypt-provisioning registered
Jan 13 20:45:15.037665 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:45:15.037674 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:45:15.037683 kernel: ima: No architecture policies found
Jan 13 20:45:15.037694 kernel: clk: Disabling unused clocks
Jan 13 20:45:15.037703 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 13 20:45:15.037712 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 20:45:15.037721 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 13 20:45:15.037729 kernel: Run /init as init process
Jan 13 20:45:15.037738 kernel: with arguments:
Jan 13 20:45:15.037747 kernel: /init
Jan 13 20:45:15.037756 kernel: with environment:
Jan 13 20:45:15.037765 kernel: HOME=/
Jan 13 20:45:15.037773 kernel: TERM=linux
Jan 13 20:45:15.037787 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:45:15.037815 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:45:15.037827 systemd[1]: Detected virtualization amazon.
Jan 13 20:45:15.037836 systemd[1]: Detected architecture x86-64.
Jan 13 20:45:15.037846 systemd[1]: Running in initrd.
Jan 13 20:45:15.037855 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:45:15.037864 systemd[1]: Hostname set to <localhost>.
Jan 13 20:45:15.037877 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:45:15.037886 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:45:15.037896 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:45:15.037905 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:45:15.037916 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:45:15.040016 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:45:15.040036 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:45:15.040051 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:45:15.040077 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:45:15.040111 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:45:15.040124 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:45:15.040138 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:45:15.040153 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:45:15.040167 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:45:15.040185 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:45:15.040201 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:45:15.040215 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:45:15.040229 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:45:15.040244 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:45:15.040259 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:45:15.040275 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:45:15.040290 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:45:15.040306 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:45:15.040364 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:45:15.040380 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:45:15.040397 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:45:15.040414 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:45:15.040430 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:45:15.040453 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:45:15.040469 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:45:15.040488 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:45:15.040509 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:45:15.040528 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:45:15.040583 systemd-journald[179]: Collecting audit messages is disabled.
Jan 13 20:45:15.040631 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:45:15.040652 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:45:15.040674 systemd-journald[179]: Journal started
Jan 13 20:45:15.040718 systemd-journald[179]: Runtime Journal (/run/log/journal/ec21360f618e7bf073126963f0b4025f) is 4.8M, max 38.6M, 33.7M free.
Jan 13 20:45:15.045602 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:45:15.044283 systemd-modules-load[180]: Inserted module 'overlay'
Jan 13 20:45:15.069798 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:45:15.256710 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:45:15.256755 kernel: Bridge firewalling registered
Jan 13 20:45:15.091481 systemd-modules-load[180]: Inserted module 'br_netfilter'
Jan 13 20:45:15.267180 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:45:15.269882 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:45:15.271985 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:45:15.295633 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:45:15.301792 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:45:15.303645 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:45:15.312740 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:45:15.340613 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:45:15.348784 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:45:15.359981 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:45:15.363424 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:45:15.365532 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:45:15.394467 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:45:15.415413 dracut-cmdline[210]: dracut-dracut-053
Jan 13 20:45:15.449501 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:45:15.457153 systemd-resolved[214]: Positive Trust Anchors:
Jan 13 20:45:15.457170 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:45:15.457239 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:45:15.461670 systemd-resolved[214]: Defaulting to hostname 'linux'.
Jan 13 20:45:15.463276 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:45:15.471066 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:45:15.599353 kernel: SCSI subsystem initialized
Jan 13 20:45:15.610360 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:45:15.623350 kernel: iscsi: registered transport (tcp)
Jan 13 20:45:15.650412 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:45:15.651182 kernel: QLogic iSCSI HBA Driver
Jan 13 20:45:15.707931 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:45:15.716882 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:45:15.763357 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:45:15.763448 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:45:15.763475 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:45:15.804357 kernel: raid6: avx512x4 gen() 18367 MB/s
Jan 13 20:45:15.821351 kernel: raid6: avx512x2 gen() 18041 MB/s
Jan 13 20:45:15.838356 kernel: raid6: avx512x1 gen() 17075 MB/s
Jan 13 20:45:15.855354 kernel: raid6: avx2x4 gen() 17081 MB/s
Jan 13 20:45:15.872361 kernel: raid6: avx2x2 gen() 16305 MB/s
Jan 13 20:45:15.889643 kernel: raid6: avx2x1 gen() 13057 MB/s
Jan 13 20:45:15.889725 kernel: raid6: using algorithm avx512x4 gen() 18367 MB/s
Jan 13 20:45:15.907650 kernel: raid6: .... xor() 7470 MB/s, rmw enabled
Jan 13 20:45:15.907712 kernel: raid6: using avx512x2 recovery algorithm
Jan 13 20:45:15.931451 kernel: xor: automatically using best checksumming function avx
Jan 13 20:45:16.094349 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:45:16.105271 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:45:16.112498 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:45:16.126853 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Jan 13 20:45:16.131560 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:45:16.144266 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:45:16.167931 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Jan 13 20:45:16.209678 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:45:16.228517 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:45:16.304201 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:45:16.314544 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:45:16.343070 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:45:16.347045 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:45:16.351933 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:45:16.354945 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:45:16.363593 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:45:16.407696 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:45:16.422660 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 20:45:16.437758 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:45:16.437921 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:45:16.448082 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:45:16.454572 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:45:16.467947 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 20:45:16.467984 kernel: AES CTR mode by8 optimization enabled
Jan 13 20:45:16.468012 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 20:45:16.487765 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 20:45:16.487971 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 13 20:45:16.488155 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 13 20:45:16.488383 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 13 20:45:16.488405 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:5c:28:24:12:af
Jan 13 20:45:16.454817 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:45:16.459917 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:45:16.471693 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:45:16.501988 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 13 20:45:16.514352 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:45:16.514427 kernel: GPT:9289727 != 16777215
Jan 13 20:45:16.514449 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:45:16.514469 kernel: GPT:9289727 != 16777215
Jan 13 20:45:16.514485 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:45:16.514505 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:45:16.519143 (udev-worker)[450]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:45:16.747120 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (455)
Jan 13 20:45:16.747165 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (458)
Jan 13 20:45:16.749809 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:45:16.761627 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:45:16.787991 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 20:45:16.816666 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 13 20:45:16.827572 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 13 20:45:16.828057 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:45:16.846802 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 13 20:45:16.846977 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 13 20:45:16.859750 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:45:16.874345 disk-uuid[629]: Primary Header is updated.
Jan 13 20:45:16.874345 disk-uuid[629]: Secondary Entries is updated.
Jan 13 20:45:16.874345 disk-uuid[629]: Secondary Header is updated.
Jan 13 20:45:16.882435 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:45:16.905511 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:45:17.897490 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:45:17.897567 disk-uuid[630]: The operation has completed successfully.
Jan 13 20:45:18.076765 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:45:18.076890 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:45:18.102547 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:45:18.109863 sh[890]: Success
Jan 13 20:45:18.131355 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 20:45:18.279665 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:45:18.294474 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:45:18.297250 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:45:18.345296 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb
Jan 13 20:45:18.345408 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:45:18.345435 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:45:18.346580 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:45:18.347532 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:45:18.497368 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:45:18.511935 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:45:18.514625 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:45:18.524734 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:45:18.532220 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:45:18.555939 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:45:18.556023 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:45:18.556048 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:45:18.564355 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:45:18.579756 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:45:18.583633 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:45:18.592885 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:45:18.603522 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:45:18.681168 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:45:18.692720 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:45:18.730935 systemd-networkd[1082]: lo: Link UP
Jan 13 20:45:18.730954 systemd-networkd[1082]: lo: Gained carrier
Jan 13 20:45:18.733253 systemd-networkd[1082]: Enumeration completed
Jan 13 20:45:18.734235 systemd-networkd[1082]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:45:18.734242 systemd-networkd[1082]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:45:18.735456 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:45:18.749932 systemd[1]: Reached target network.target - Network.
Jan 13 20:45:18.754390 systemd-networkd[1082]: eth0: Link UP
Jan 13 20:45:18.754401 systemd-networkd[1082]: eth0: Gained carrier
Jan 13 20:45:18.754417 systemd-networkd[1082]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:45:18.767425 systemd-networkd[1082]: eth0: DHCPv4 address 172.31.17.145/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:45:19.052484 ignition[1007]: Ignition 2.20.0
Jan 13 20:45:19.052495 ignition[1007]: Stage: fetch-offline
Jan 13 20:45:19.052673 ignition[1007]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:19.052682 ignition[1007]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:45:19.052977 ignition[1007]: Ignition finished successfully
Jan 13 20:45:19.060752 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:45:19.067756 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:45:19.122705 ignition[1090]: Ignition 2.20.0
Jan 13 20:45:19.122720 ignition[1090]: Stage: fetch
Jan 13 20:45:19.126218 ignition[1090]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:19.126234 ignition[1090]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:45:19.127315 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:45:19.192786 ignition[1090]: PUT result: OK
Jan 13 20:45:19.211479 ignition[1090]: parsed url from cmdline: ""
Jan 13 20:45:19.211494 ignition[1090]: no config URL provided
Jan 13 20:45:19.211508 ignition[1090]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:45:19.211523 ignition[1090]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:45:19.211563 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:45:19.215039 ignition[1090]: PUT result: OK
Jan 13 20:45:19.215091 ignition[1090]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 20:45:19.218463 ignition[1090]: GET result: OK
Jan 13 20:45:19.218590 ignition[1090]: parsing config with SHA512: 9a0b8628bc006f44910214c9bf1d78ded46e97fed1bc00d75a51ff4b38877bcb190b1f9299ec90d926d1bd9ba79e1d4b79f0645cd9007d63f155f791650e269e
Jan 13 20:45:19.251639 unknown[1090]: fetched base config from "system"
Jan 13 20:45:19.251653 unknown[1090]: fetched base config from "system"
Jan 13 20:45:19.252429 ignition[1090]: fetch: fetch complete
Jan 13 20:45:19.251661 unknown[1090]: fetched user config from "aws"
Jan 13 20:45:19.252438 ignition[1090]: fetch: fetch passed
Jan 13 20:45:19.255264 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:45:19.252882 ignition[1090]: Ignition finished successfully
Jan 13 20:45:19.268611 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:45:19.288840 ignition[1096]: Ignition 2.20.0
Jan 13 20:45:19.288856 ignition[1096]: Stage: kargs
Jan 13 20:45:19.289428 ignition[1096]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:19.289446 ignition[1096]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:45:19.290192 ignition[1096]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:45:19.293275 ignition[1096]: PUT result: OK
Jan 13 20:45:19.299199 ignition[1096]: kargs: kargs passed
Jan 13 20:45:19.299277 ignition[1096]: Ignition finished successfully
Jan 13 20:45:19.303724 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:45:19.312661 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:45:19.330317 ignition[1102]: Ignition 2.20.0
Jan 13 20:45:19.330350 ignition[1102]: Stage: disks
Jan 13 20:45:19.330671 ignition[1102]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:19.330680 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:45:19.330774 ignition[1102]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:45:19.334252 ignition[1102]: PUT result: OK
Jan 13 20:45:19.342134 ignition[1102]: disks: disks passed
Jan 13 20:45:19.342257 ignition[1102]: Ignition finished successfully
Jan 13 20:45:19.346559 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:45:19.350106 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:45:19.364190 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:45:19.367948 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:45:19.382982 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:45:19.391469 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:45:19.405718 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:45:19.472556 systemd-fsck[1110]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:45:19.475887 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:45:19.487649 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:45:19.606353 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none.
Jan 13 20:45:19.606532 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:45:19.607349 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:45:19.618507 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:45:19.623881 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:45:19.625883 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:45:19.625947 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:45:19.625981 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:45:19.641645 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:45:19.653300 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:45:19.665651 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1129)
Jan 13 20:45:19.668700 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:45:19.668759 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:45:19.671373 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:45:19.688371 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:45:19.694452 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:45:20.165500 systemd-networkd[1082]: eth0: Gained IPv6LL
Jan 13 20:45:20.198505 initrd-setup-root[1153]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:45:20.223337 initrd-setup-root[1160]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:45:20.238178 initrd-setup-root[1167]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:45:20.243791 initrd-setup-root[1174]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:45:20.588798 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:45:20.602605 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:45:20.610355 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:45:20.623916 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:45:20.625622 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:45:20.653470 ignition[1246]: INFO : Ignition 2.20.0
Jan 13 20:45:20.653470 ignition[1246]: INFO : Stage: mount
Jan 13 20:45:20.655871 ignition[1246]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:20.655871 ignition[1246]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:45:20.655871 ignition[1246]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:45:20.660435 ignition[1246]: INFO : PUT result: OK
Jan 13 20:45:20.664911 ignition[1246]: INFO : mount: mount passed
Jan 13 20:45:20.666736 ignition[1246]: INFO : Ignition finished successfully
Jan 13 20:45:20.669868 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:45:20.682467 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:45:20.685441 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:45:20.700702 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:45:20.724354 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1259)
Jan 13 20:45:20.724435 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:45:20.726548 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:45:20.726622 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:45:20.734364 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:45:20.737631 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:45:20.763790 ignition[1276]: INFO : Ignition 2.20.0
Jan 13 20:45:20.763790 ignition[1276]: INFO : Stage: files
Jan 13 20:45:20.771793 ignition[1276]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:20.771793 ignition[1276]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:45:20.771793 ignition[1276]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:45:20.771793 ignition[1276]: INFO : PUT result: OK
Jan 13 20:45:20.787516 ignition[1276]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:45:20.793821 ignition[1276]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:45:20.793821 ignition[1276]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:45:20.813479 ignition[1276]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:45:20.817078 ignition[1276]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:45:20.821365 unknown[1276]: wrote ssh authorized keys file for user: core
Jan 13 20:45:20.823259 ignition[1276]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:45:20.829087 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:45:20.835066 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 20:45:20.920959 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:45:21.102568 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:45:21.106286 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:45:21.106286 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 13 20:45:21.446989 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 20:45:21.570740 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:45:21.573232 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:45:21.573232 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:45:21.582448 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:45:21.582448 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:45:21.582448 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:45:21.582448 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:45:21.582448 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:45:21.582448 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:45:21.582448 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:45:21.582448 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:45:21.582448 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:45:21.582448 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:45:21.582448 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:45:21.582448 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 13 20:45:21.851197 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 20:45:22.167140 ignition[1276]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:45:22.167140 ignition[1276]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 13 20:45:22.172307 ignition[1276]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:45:22.177757 ignition[1276]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:45:22.177757 ignition[1276]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 20:45:22.177757 ignition[1276]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:45:22.177757 ignition[1276]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:45:22.177757 ignition[1276]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:45:22.177757 ignition[1276]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:45:22.177757 ignition[1276]: INFO : files: files passed
Jan 13 20:45:22.177757 ignition[1276]: INFO : Ignition finished successfully
Jan 13 20:45:22.178861 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:45:22.217871 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:45:22.220858 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:45:22.230805 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:45:22.232473 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:45:22.244944 initrd-setup-root-after-ignition[1305]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:45:22.244944 initrd-setup-root-after-ignition[1305]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:45:22.250433 initrd-setup-root-after-ignition[1309]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:45:22.254236 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:45:22.258630 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:45:22.270568 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:45:22.332612 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:45:22.332754 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:45:22.341598 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:45:22.347973 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:45:22.355022 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:45:22.362570 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:45:22.400984 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:45:22.408557 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:45:22.423417 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:45:22.426486 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:45:22.432793 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:45:22.437066 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:45:22.437291 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:45:22.446146 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:45:22.450274 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:45:22.458406 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:45:22.464227 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:45:22.474054 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:45:22.477277 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:45:22.480464 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:45:22.484969 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:45:22.489386 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:45:22.492654 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:45:22.510202 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:45:22.510548 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:45:22.518295 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:45:22.522617 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:45:22.529148 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:45:22.529350 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:45:22.544538 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:45:22.544883 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:45:22.561399 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:45:22.566461 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:45:22.575868 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:45:22.576129 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:45:22.591557 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:45:22.602691 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:45:22.604264 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:45:22.604509 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:45:22.606584 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:45:22.606757 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:45:22.615765 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:45:22.615909 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:45:22.642474 ignition[1329]: INFO : Ignition 2.20.0
Jan 13 20:45:22.642474 ignition[1329]: INFO : Stage: umount
Jan 13 20:45:22.642474 ignition[1329]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:45:22.642474 ignition[1329]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:45:22.642474 ignition[1329]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:45:22.641851 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:45:22.656093 ignition[1329]: INFO : PUT result: OK
Jan 13 20:45:22.651098 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:45:22.651198 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:45:22.659992 ignition[1329]: INFO : umount: umount passed
Jan 13 20:45:22.659992 ignition[1329]: INFO : Ignition finished successfully
Jan 13 20:45:22.663444 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:45:22.663575 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:45:22.668286 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:45:22.668409 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:45:22.672571 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:45:22.672681 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:45:22.688667 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 20:45:22.688740 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 20:45:22.688858 systemd[1]: Stopped target network.target - Network.
Jan 13 20:45:22.701728 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:45:22.702011 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:45:22.712875 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:45:22.716275 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:45:22.719507 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:45:22.721574 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:45:22.725968 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:45:22.728919 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:45:22.728985 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:45:22.731739 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:45:22.731809 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:45:22.735396 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:45:22.735501 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:45:22.740567 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:45:22.740639 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:45:22.742544 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:45:22.742648 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:45:22.743931 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:45:22.751447 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:45:22.760382 systemd-networkd[1082]: eth0: DHCPv6 lease lost
Jan 13 20:45:22.761036 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:45:22.761144 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:45:22.768424 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:45:22.768676 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:45:22.772680 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:45:22.772782 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:45:22.781589 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:45:22.783632 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:45:22.783729 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:45:22.791869 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:45:22.792014 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:45:22.799640 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:45:22.799855 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:45:22.803747 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:45:22.803879 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:45:22.822008 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:45:22.842662 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:45:22.842774 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:45:22.851864 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:45:22.853216 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:45:22.857561 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:45:22.857640 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:45:22.861876 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:45:22.861932 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:45:22.868289 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:45:22.868375 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:45:22.879554 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:45:22.879642 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:45:22.884781 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:45:22.884862 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:45:22.898412 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:45:22.903901 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:45:22.904018 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:45:22.909482 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 20:45:22.909578 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:45:22.913264 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:45:22.913370 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:45:22.915712 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:45:22.915790 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:45:22.921970 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:45:22.922240 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:45:22.924233 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:45:22.948618 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:45:23.003624 systemd[1]: Switching root.
Jan 13 20:45:23.030927 systemd-journald[179]: Journal stopped
Jan 13 20:45:25.458234 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:45:25.460374 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:45:25.460410 kernel: SELinux: policy capability open_perms=1
Jan 13 20:45:25.460439 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:45:25.460461 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:45:25.460482 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:45:25.460499 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:45:25.460517 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:45:25.460534 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:45:25.460551 kernel: audit: type=1403 audit(1736801123.587:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:45:25.460570 systemd[1]: Successfully loaded SELinux policy in 61.915ms.
Jan 13 20:45:25.460603 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 34.128ms.
Jan 13 20:45:25.460626 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:45:25.460648 systemd[1]: Detected virtualization amazon.
Jan 13 20:45:25.460670 systemd[1]: Detected architecture x86-64.
Jan 13 20:45:25.460689 systemd[1]: Detected first boot.
Jan 13 20:45:25.460709 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:45:25.460730 zram_generator::config[1371]: No configuration found.
Jan 13 20:45:25.460752 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:45:25.460773 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:45:25.460798 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:45:25.460820 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:45:25.460840 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:45:25.460859 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:45:25.460879 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:45:25.460899 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:45:25.460925 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:45:25.460951 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:45:25.460979 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:45:25.461010 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:45:25.461035 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:45:25.461061 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:45:25.461087 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:45:25.461208 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:45:25.461239 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:45:25.461266 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:45:25.461291 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 20:45:25.461359 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:45:25.461386 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:45:25.461411 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:45:25.461437 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:45:25.461461 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:45:25.461486 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:45:25.461512 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:45:25.461538 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:45:25.461565 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:45:25.461589 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:45:25.461614 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:45:25.461639 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:45:25.461662 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:45:25.461686 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:45:25.461710 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:45:25.461734 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:45:25.461760 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:45:25.461788 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:45:25.461813 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:25.461837 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:45:25.461862 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:45:25.461886 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:45:25.461911 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:45:25.461936 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:45:25.461961 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:45:25.461989 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:45:25.462014 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:45:25.462040 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:45:25.462064 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:45:25.462089 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:45:25.462114 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:45:25.462139 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:45:25.462163 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:45:25.462188 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:45:25.462216 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:45:25.462243 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:45:25.462266 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:45:25.462289 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:45:25.462315 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:45:25.462627 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:45:25.462649 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:45:25.462668 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:45:25.462686 kernel: fuse: init (API version 7.39)
Jan 13 20:45:25.462712 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:45:25.462734 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:45:25.462755 systemd[1]: Stopped verity-setup.service.
Jan 13 20:45:25.462777 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:25.462799 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:45:25.462822 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:45:25.462844 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:45:25.462865 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:45:25.462890 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:45:25.462911 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:45:25.462933 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:45:25.462955 kernel: loop: module loaded
Jan 13 20:45:25.462976 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:45:25.463001 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:45:25.463022 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:45:25.463044 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:45:25.463063 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:45:25.463084 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:45:25.463106 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:45:25.463128 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:45:25.463154 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:45:25.463213 systemd-journald[1446]: Collecting audit messages is disabled.
Jan 13 20:45:25.463263 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:45:25.463285 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:45:25.463307 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:45:25.464401 systemd-journald[1446]: Journal started
Jan 13 20:45:25.464454 systemd-journald[1446]: Runtime Journal (/run/log/journal/ec21360f618e7bf073126963f0b4025f) is 4.8M, max 38.6M, 33.7M free.
Jan 13 20:45:24.911503 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:45:24.967572 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 13 20:45:24.968000 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:45:25.469757 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:45:25.472679 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:45:25.502429 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:45:25.513462 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:45:25.521416 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:45:25.523006 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:45:25.523055 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:45:25.529620 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:45:25.537602 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:45:25.556696 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:45:25.561477 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:45:25.597578 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:45:25.606426 kernel: ACPI: bus type drm_connector registered
Jan 13 20:45:25.605508 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:45:25.607428 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:45:25.610576 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:45:25.612224 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:45:25.615557 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:45:25.617900 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:45:25.621642 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:45:25.625656 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:45:25.625894 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:45:25.627894 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:45:25.630126 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:45:25.632065 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:45:25.666759 systemd-journald[1446]: Time spent on flushing to /var/log/journal/ec21360f618e7bf073126963f0b4025f is 92.848ms for 964 entries.
Jan 13 20:45:25.666759 systemd-journald[1446]: System Journal (/var/log/journal/ec21360f618e7bf073126963f0b4025f) is 8.0M, max 195.6M, 187.6M free.
Jan 13 20:45:25.787604 systemd-journald[1446]: Received client request to flush runtime journal.
Jan 13 20:45:25.787683 kernel: loop0: detected capacity change from 0 to 62848
Jan 13 20:45:25.787717 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:45:25.669547 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:45:25.689929 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:45:25.692099 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:45:25.701727 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:45:25.732644 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:45:25.752531 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:45:25.795959 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:45:25.815492 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:45:25.818424 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:45:25.820770 systemd-tmpfiles[1496]: ACLs are not supported, ignoring.
Jan 13 20:45:25.822394 systemd-tmpfiles[1496]: ACLs are not supported, ignoring.
Jan 13 20:45:25.823910 udevadm[1507]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 20:45:25.828420 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:45:25.835946 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:45:25.849810 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:45:25.853379 kernel: loop1: detected capacity change from 0 to 138184
Jan 13 20:45:25.936866 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:45:25.947409 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:45:25.989359 kernel: loop2: detected capacity change from 0 to 211296
Jan 13 20:45:25.996063 systemd-tmpfiles[1521]: ACLs are not supported, ignoring.
Jan 13 20:45:25.996488 systemd-tmpfiles[1521]: ACLs are not supported, ignoring.
Jan 13 20:45:26.004590 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:45:26.197358 kernel: loop3: detected capacity change from 0 to 140992
Jan 13 20:45:26.332354 kernel: loop4: detected capacity change from 0 to 62848
Jan 13 20:45:26.367779 kernel: loop5: detected capacity change from 0 to 138184
Jan 13 20:45:26.407373 kernel: loop6: detected capacity change from 0 to 211296
Jan 13 20:45:26.443354 kernel: loop7: detected capacity change from 0 to 140992
Jan 13 20:45:26.477295 (sd-merge)[1527]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 13 20:45:26.479619 (sd-merge)[1527]: Merged extensions into '/usr'.
Jan 13 20:45:26.485458 systemd[1]: Reloading requested from client PID 1495 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:45:26.485788 systemd[1]: Reloading...
Jan 13 20:45:26.603315 zram_generator::config[1550]: No configuration found.
Jan 13 20:45:26.823949 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:45:26.886121 systemd[1]: Reloading finished in 399 ms.
Jan 13 20:45:26.927528 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:45:26.929950 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:45:26.946914 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:45:26.950848 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:45:26.956547 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:45:26.971758 systemd[1]: Reloading requested from client PID 1602 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:45:26.971771 systemd[1]: Reloading...
Jan 13 20:45:27.006115 systemd-tmpfiles[1603]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:45:27.008368 systemd-tmpfiles[1603]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:45:27.011107 systemd-tmpfiles[1603]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:45:27.011753 systemd-tmpfiles[1603]: ACLs are not supported, ignoring.
Jan 13 20:45:27.012878 systemd-tmpfiles[1603]: ACLs are not supported, ignoring.
Jan 13 20:45:27.026274 systemd-udevd[1604]: Using default interface naming scheme 'v255'.
Jan 13 20:45:27.028273 systemd-tmpfiles[1603]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:45:27.028284 systemd-tmpfiles[1603]: Skipping /boot
Jan 13 20:45:27.044015 systemd-tmpfiles[1603]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:45:27.044200 systemd-tmpfiles[1603]: Skipping /boot
Jan 13 20:45:27.100675 zram_generator::config[1634]: No configuration found.
Jan 13 20:45:27.297236 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:45:27.323225 (udev-worker)[1682]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:45:27.446354 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 13 20:45:27.459351 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Jan 13 20:45:27.467883 systemd[1]: Reloading finished in 495 ms.
Jan 13 20:45:27.485607 kernel: ACPI: button: Power Button [PWRF]
Jan 13 20:45:27.485682 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Jan 13 20:45:27.487867 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Jan 13 20:45:27.500854 ldconfig[1487]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:45:27.514358 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1679)
Jan 13 20:45:27.522888 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:45:27.525692 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:45:27.527616 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:45:27.532123 kernel: ACPI: button: Sleep Button [SLPF]
Jan 13 20:45:27.553036 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 20:45:27.567705 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:45:27.575549 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:45:27.585226 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:45:27.600724 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:45:27.611604 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:45:27.618601 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:45:27.638284 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:27.638710 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:45:27.647469 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:45:27.655240 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:45:27.660502 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:45:27.662116 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:45:27.674545 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:45:27.676184 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:27.677715 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:45:27.678450 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:45:27.699917 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:27.700304 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:45:27.709350 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:45:27.711441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:45:27.722812 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:45:27.724433 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:27.732817 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:45:27.746483 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:27.747135 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:45:27.756677 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:45:27.758718 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:45:27.759680 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:45:27.761371 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:45:27.764151 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:45:27.764970 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:45:27.767304 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:45:27.768525 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:45:27.772928 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:45:27.774258 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 20:45:27.781372 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:45:27.793729 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:45:27.795387 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:45:27.799171 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:45:27.825661 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:45:27.825872 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:45:27.832393 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:45:27.846686 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:45:27.860157 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:45:27.885521 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:45:27.912350 augenrules[1820]: No rules
Jan 13 20:45:27.915457 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:45:27.915707 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:45:27.921399 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:45:27.923712 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:45:28.031771 systemd-networkd[1729]: lo: Link UP
Jan 13 20:45:28.032163 systemd-networkd[1729]: lo: Gained carrier
Jan 13 20:45:28.034131 systemd-networkd[1729]: Enumeration completed
Jan 13 20:45:28.034439 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:45:28.035017 systemd-networkd[1729]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:45:28.035108 systemd-networkd[1729]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:45:28.037645 systemd-networkd[1729]: eth0: Link UP
Jan 13 20:45:28.038016 systemd-networkd[1729]: eth0: Gained carrier
Jan 13 20:45:28.038114 systemd-networkd[1729]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:45:28.055415 systemd-networkd[1729]: eth0: DHCPv4 address 172.31.17.145/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:45:28.080813 systemd-resolved[1733]: Positive Trust Anchors:
Jan 13 20:45:28.080834 systemd-resolved[1733]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:45:28.080897 systemd-resolved[1733]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:45:28.089181 systemd-resolved[1733]: Defaulting to hostname 'linux'.
Jan 13 20:45:28.168957 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:45:28.170824 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:45:28.172794 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:45:28.180767 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 20:45:28.182936 systemd[1]: Reached target network.target - Network.
Jan 13 20:45:28.187377 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:45:28.195541 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:45:28.200388 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:45:28.206591 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:45:28.232270 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:45:28.234896 lvm[1852]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:45:28.262638 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:45:28.265028 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:45:28.267406 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:45:28.269421 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:45:28.272133 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:45:28.277783 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:45:28.279340 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:45:28.281104 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:45:28.282836 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:45:28.282865 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:45:28.284098 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:45:28.286004 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:45:28.292030 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:45:28.298899 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:45:28.304408 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:45:28.306977 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:45:28.308754 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:45:28.310211 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:45:28.311593 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:45:28.311636 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:45:28.318489 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:45:28.323557 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 20:45:28.330818 lvm[1860]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:45:28.333570 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:45:28.341593 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:45:28.345457 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:45:28.347489 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:45:28.353677 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:45:28.366558 systemd[1]: Started ntpd.service - Network Time Service.
Jan 13 20:45:28.369496 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 20:45:28.373501 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 13 20:45:28.378562 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:45:28.389583 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:45:28.403186 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:45:28.405550 jq[1864]: false
Jan 13 20:45:28.405221 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:45:28.405889 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:45:28.413672 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:45:28.419498 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:45:28.422269 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:45:28.431982 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:45:28.432263 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:45:28.463222 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 20:45:28.521387 jq[1874]: true
Jan 13 20:45:28.511835 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:45:28.512197 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:45:28.554352 extend-filesystems[1865]: Found loop4
Jan 13 20:45:28.554352 extend-filesystems[1865]: Found loop5
Jan 13 20:45:28.554352 extend-filesystems[1865]: Found loop6
Jan 13 20:45:28.554352 extend-filesystems[1865]: Found loop7
Jan 13 20:45:28.554352 extend-filesystems[1865]: Found nvme0n1
Jan 13 20:45:28.554352 extend-filesystems[1865]: Found nvme0n1p1
Jan 13 20:45:28.554352 extend-filesystems[1865]: Found nvme0n1p2
Jan 13 20:45:28.554352 extend-filesystems[1865]: Found nvme0n1p3
Jan 13 20:45:28.554352 extend-filesystems[1865]: Found usr
Jan 13 20:45:28.566993 extend-filesystems[1865]: Found nvme0n1p4
Jan 13 20:45:28.566993 extend-filesystems[1865]: Found nvme0n1p6
Jan 13 20:45:28.566993 extend-filesystems[1865]: Found nvme0n1p7
Jan 13 20:45:28.566993 extend-filesystems[1865]: Found nvme0n1p9
Jan 13 20:45:28.566993 extend-filesystems[1865]: Checking size of /dev/nvme0n1p9
Jan 13 20:45:28.597739 tar[1878]: linux-amd64/helm
Jan 13 20:45:28.599722 update_engine[1873]: I20250113 20:45:28.596755 1873 main.cc:92] Flatcar Update Engine starting
Jan 13 20:45:28.600684 (ntainerd)[1893]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:45:28.607462 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:22 UTC 2025 (1): Starting
Jan 13 20:45:28.607462 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 13 20:45:28.607462 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: ----------------------------------------------------
Jan 13 20:45:28.607462 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: ntp-4 is maintained by Network Time Foundation,
Jan 13 20:45:28.607462 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 13 20:45:28.607462 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: corporation. Support and training for ntp-4 are
Jan 13 20:45:28.607462 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: available at https://www.nwtime.org/support
Jan 13 20:45:28.607462 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: ----------------------------------------------------
Jan 13 20:45:28.602441 ntpd[1867]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:22 UTC 2025 (1): Starting
Jan 13 20:45:28.605545 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:45:28.617654 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: proto: precision = 0.076 usec (-24)
Jan 13 20:45:28.602467 ntpd[1867]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 13 20:45:28.616765 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:45:28.602478 ntpd[1867]: ----------------------------------------------------
Jan 13 20:45:28.616817 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:45:28.602487 ntpd[1867]: ntp-4 is maintained by Network Time Foundation,
Jan 13 20:45:28.602497 ntpd[1867]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 13 20:45:28.602506 ntpd[1867]: corporation. Support and training for ntp-4 are
Jan 13 20:45:28.602515 ntpd[1867]: available at https://www.nwtime.org/support
Jan 13 20:45:28.602525 ntpd[1867]: ----------------------------------------------------
Jan 13 20:45:28.605126 dbus-daemon[1863]: [system] SELinux support is enabled
Jan 13 20:45:28.616274 ntpd[1867]: proto: precision = 0.076 usec (-24)
Jan 13 20:45:28.633726 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: basedate set to 2025-01-01
Jan 13 20:45:28.633726 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: gps base set to 2025-01-05 (week 2348)
Jan 13 20:45:28.633726 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: Listen and drop on 0 v6wildcard [::]:123
Jan 13 20:45:28.633726 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 13 20:45:28.618937 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:45:28.625470 ntpd[1867]: basedate set to 2025-01-01
Jan 13 20:45:28.618966 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:45:28.625496 ntpd[1867]: gps base set to 2025-01-05 (week 2348)
Jan 13 20:45:28.633110 ntpd[1867]: Listen and drop on 0 v6wildcard [::]:123
Jan 13 20:45:28.633175 ntpd[1867]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 13 20:45:28.640527 ntpd[1867]: Listen normally on 2 lo 127.0.0.1:123
Jan 13 20:45:28.647875 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: Listen normally on 2 lo 127.0.0.1:123
Jan 13 20:45:28.647875 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: Listen normally on 3 eth0 172.31.17.145:123
Jan 13 20:45:28.647875 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: Listen normally on 4 lo [::1]:123
Jan 13 20:45:28.647875 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: bind(21) AF_INET6 fe80::45c:28ff:fe24:12af%2#123 flags 0x11 failed: Cannot assign requested address
Jan 13 20:45:28.647875 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: unable to create socket on eth0 (5) for fe80::45c:28ff:fe24:12af%2#123
Jan 13 20:45:28.647875 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: failed to init interface for address fe80::45c:28ff:fe24:12af%2
Jan 13 20:45:28.647875 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: Listening on routing socket on fd #21 for interface updates
Jan 13 20:45:28.640583 ntpd[1867]: Listen normally on 3 eth0 172.31.17.145:123
Jan 13 20:45:28.640627 ntpd[1867]: Listen normally on 4 lo [::1]:123
Jan 13 20:45:28.640682 ntpd[1867]: bind(21) AF_INET6 fe80::45c:28ff:fe24:12af%2#123 flags 0x11 failed: Cannot assign requested address
Jan 13 20:45:28.640704 ntpd[1867]: unable to create socket on eth0 (5) for fe80::45c:28ff:fe24:12af%2#123
Jan 13 20:45:28.640721 ntpd[1867]: failed to init interface for address fe80::45c:28ff:fe24:12af%2
Jan 13 20:45:28.640757 ntpd[1867]: Listening on routing socket on fd #21 for interface updates
Jan 13 20:45:28.657092 jq[1892]: true
Jan 13 20:45:28.654131 dbus-daemon[1863]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1729 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 13 20:45:28.671175 update_engine[1873]: I20250113 20:45:28.665002 1873 update_check_scheduler.cc:74] Next update check in 10m26s
Jan 13 20:45:28.671252 extend-filesystems[1865]: Resized partition /dev/nvme0n1p9
Jan 13 20:45:28.678600 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 20:45:28.678600 ntpd[1867]: 13 Jan 20:45:28 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 20:45:28.674742 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 20:45:28.674780 ntpd[1867]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 20:45:28.683029 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:45:28.685222 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:45:28.701970 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:45:28.702762 extend-filesystems[1913]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:45:28.709575 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 13 20:45:28.714537 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:45:28.737356 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jan 13 20:45:28.745710 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 13 20:45:28.775280 systemd-logind[1872]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 20:45:28.775316 systemd-logind[1872]: Watching system buttons on /dev/input/event3 (Sleep Button)
Jan 13 20:45:28.779479 systemd-logind[1872]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 20:45:28.784461 systemd-logind[1872]: New seat seat0.
Jan 13 20:45:28.787504 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:45:28.807054 coreos-metadata[1862]: Jan 13 20:45:28.806 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 13 20:45:28.808459 coreos-metadata[1862]: Jan 13 20:45:28.808 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 13 20:45:28.809236 coreos-metadata[1862]: Jan 13 20:45:28.809 INFO Fetch successful
Jan 13 20:45:28.809462 coreos-metadata[1862]: Jan 13 20:45:28.809 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 13 20:45:28.810362 coreos-metadata[1862]: Jan 13 20:45:28.810 INFO Fetch successful
Jan 13 20:45:28.810532 coreos-metadata[1862]: Jan 13 20:45:28.810 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 13 20:45:28.811238 coreos-metadata[1862]: Jan 13 20:45:28.811 INFO Fetch successful
Jan 13 20:45:28.811526 coreos-metadata[1862]: Jan 13 20:45:28.811 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 13 20:45:28.811976 coreos-metadata[1862]: Jan 13 20:45:28.811 INFO Fetch successful
Jan 13 20:45:28.812303 coreos-metadata[1862]: Jan 13 20:45:28.812 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 13 20:45:28.812869 coreos-metadata[1862]: Jan 13 20:45:28.812 INFO Fetch failed with 404: resource not found
Jan 13 20:45:28.812869 coreos-metadata[1862]: Jan 13 20:45:28.812 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 13 20:45:28.813730 coreos-metadata[1862]: Jan 13 20:45:28.813 INFO Fetch successful
Jan 13 20:45:28.813730 coreos-metadata[1862]: Jan 13 20:45:28.813 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 13 20:45:28.814549 coreos-metadata[1862]: Jan 13 20:45:28.814 INFO Fetch successful
Jan 13 20:45:28.814549 coreos-metadata[1862]: Jan 13 20:45:28.814 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 13 20:45:28.814964 coreos-metadata[1862]: Jan 13 20:45:28.814 INFO Fetch successful
Jan 13 20:45:28.814964
coreos-metadata[1862]: Jan 13 20:45:28.814 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 20:45:28.815620 coreos-metadata[1862]: Jan 13 20:45:28.815 INFO Fetch successful Jan 13 20:45:28.815764 coreos-metadata[1862]: Jan 13 20:45:28.815 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 20:45:28.818997 coreos-metadata[1862]: Jan 13 20:45:28.817 INFO Fetch successful Jan 13 20:45:28.872924 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1680) Jan 13 20:45:28.895906 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:45:28.898032 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:45:29.024938 dbus-daemon[1863]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 20:45:29.026231 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 20:45:29.055050 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 20:45:29.037839 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 20:45:29.028861 dbus-daemon[1863]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1917 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 20:45:29.060899 extend-filesystems[1913]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 20:45:29.060899 extend-filesystems[1913]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:45:29.060899 extend-filesystems[1913]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 13 20:45:29.060745 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:45:29.075192 bash[1940]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:45:29.075391 extend-filesystems[1865]: Resized filesystem in /dev/nvme0n1p9 Jan 13 20:45:29.063312 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:45:29.066400 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:45:29.102059 polkitd[1979]: Started polkitd version 121 Jan 13 20:45:29.102729 systemd[1]: Starting sshkeys.service... Jan 13 20:45:29.121801 polkitd[1979]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 20:45:29.125684 polkitd[1979]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 20:45:29.126549 polkitd[1979]: Finished loading, compiling and executing 2 rules Jan 13 20:45:29.127400 dbus-daemon[1863]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 20:45:29.127677 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 20:45:29.131116 polkitd[1979]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 20:45:29.170073 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:45:29.195357 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:45:29.205461 systemd-hostnamed[1917]: Hostname set to (transient) Jan 13 20:45:29.205602 systemd-resolved[1733]: System hostname changed to 'ip-172-31-17-145'. 
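
Note: the coreos-metadata fetches above follow the IMDSv2 flow, a PUT to the token endpoint followed by GETs carrying the token. A hand-run equivalent (the /latest/ paths are the common alias for the dated API versions the agent uses):

    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
        -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
        http://169.254.169.254/latest/meta-data/instance-id

The 404 on .../meta-data/ipv6 is expected on an instance with no IPv6 address assigned and is treated as a soft failure. The extend-filesystems/resize2fs exchange is the standard first-boot grow of the root filesystem; an equivalent manual sequence (growpart is from cloud-utils; device names as in the log) would be:

    growpart /dev/nvme0n1 9       # grow partition 9 to fill the disk
    resize2fs /dev/nvme0n1p9      # on-line ext4 grow: 553472 -> 1489915 blocks, per the log
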
Jan 13 20:45:29.299524 locksmithd[1918]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:45:29.454470 coreos-metadata[2010]: Jan 13 20:45:29.454 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:45:29.459343 coreos-metadata[2010]: Jan 13 20:45:29.456 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 20:45:29.459343 coreos-metadata[2010]: Jan 13 20:45:29.457 INFO Fetch successful Jan 13 20:45:29.459343 coreos-metadata[2010]: Jan 13 20:45:29.457 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 20:45:29.459343 coreos-metadata[2010]: Jan 13 20:45:29.458 INFO Fetch successful Jan 13 20:45:29.464144 unknown[2010]: wrote ssh authorized keys file for user: core Jan 13 20:45:29.538212 update-ssh-keys[2061]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:45:29.534163 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:45:29.550149 systemd[1]: Finished sshkeys.service. Jan 13 20:45:29.602929 ntpd[1867]: bind(24) AF_INET6 fe80::45c:28ff:fe24:12af%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:45:29.602981 ntpd[1867]: unable to create socket on eth0 (6) for fe80::45c:28ff:fe24:12af%2#123 Jan 13 20:45:29.603337 ntpd[1867]: 13 Jan 20:45:29 ntpd[1867]: bind(24) AF_INET6 fe80::45c:28ff:fe24:12af%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:45:29.603337 ntpd[1867]: 13 Jan 20:45:29 ntpd[1867]: unable to create socket on eth0 (6) for fe80::45c:28ff:fe24:12af%2#123 Jan 13 20:45:29.603337 ntpd[1867]: 13 Jan 20:45:29 ntpd[1867]: failed to init interface for address fe80::45c:28ff:fe24:12af%2 Jan 13 20:45:29.602996 ntpd[1867]: failed to init interface for address fe80::45c:28ff:fe24:12af%2 Jan 13 20:45:29.669739 containerd[1893]: time="2025-01-13T20:45:29.669617313Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:45:29.780960 containerd[1893]: time="2025-01-13T20:45:29.780838207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:45:29.786616 containerd[1893]: time="2025-01-13T20:45:29.786555791Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:45:29.788202 containerd[1893]: time="2025-01-13T20:45:29.786768362Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:45:29.788202 containerd[1893]: time="2025-01-13T20:45:29.786804473Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:45:29.788202 containerd[1893]: time="2025-01-13T20:45:29.786989935Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:45:29.788202 containerd[1893]: time="2025-01-13T20:45:29.787011683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:45:29.788202 containerd[1893]: time="2025-01-13T20:45:29.787082645Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:45:29.788202 containerd[1893]: time="2025-01-13T20:45:29.787110415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:45:29.788202 containerd[1893]: time="2025-01-13T20:45:29.787343441Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:45:29.788202 containerd[1893]: time="2025-01-13T20:45:29.787364739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:45:29.788202 containerd[1893]: time="2025-01-13T20:45:29.787382427Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:45:29.788202 containerd[1893]: time="2025-01-13T20:45:29.787397039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:45:29.788202 containerd[1893]: time="2025-01-13T20:45:29.787482278Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:45:29.788202 containerd[1893]: time="2025-01-13T20:45:29.787729241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:45:29.788716 containerd[1893]: time="2025-01-13T20:45:29.787877048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:45:29.788716 containerd[1893]: time="2025-01-13T20:45:29.787895738Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:45:29.788716 containerd[1893]: time="2025-01-13T20:45:29.787991493Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:45:29.788716 containerd[1893]: time="2025-01-13T20:45:29.788055345Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:45:29.799350 containerd[1893]: time="2025-01-13T20:45:29.796704092Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:45:29.799350 containerd[1893]: time="2025-01-13T20:45:29.796786594Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:45:29.799350 containerd[1893]: time="2025-01-13T20:45:29.796811162Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:45:29.799350 containerd[1893]: time="2025-01-13T20:45:29.796848903Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:45:29.799350 containerd[1893]: time="2025-01-13T20:45:29.796868863Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:45:29.799350 containerd[1893]: time="2025-01-13T20:45:29.797054328Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 13 20:45:29.799350 containerd[1893]: time="2025-01-13T20:45:29.797386791Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:45:29.799350 containerd[1893]: time="2025-01-13T20:45:29.797502370Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:45:29.799350 containerd[1893]: time="2025-01-13T20:45:29.797522341Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:45:29.799350 containerd[1893]: time="2025-01-13T20:45:29.797540978Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:45:29.799350 containerd[1893]: time="2025-01-13T20:45:29.797558573Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:45:29.799350 containerd[1893]: time="2025-01-13T20:45:29.797577052Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:45:29.799350 containerd[1893]: time="2025-01-13T20:45:29.797593223Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:45:29.799350 containerd[1893]: time="2025-01-13T20:45:29.797613763Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:45:29.799900 containerd[1893]: time="2025-01-13T20:45:29.797634330Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:45:29.799900 containerd[1893]: time="2025-01-13T20:45:29.797654222Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:45:29.799900 containerd[1893]: time="2025-01-13T20:45:29.797682832Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:45:29.799900 containerd[1893]: time="2025-01-13T20:45:29.797700483Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:45:29.799900 containerd[1893]: time="2025-01-13T20:45:29.797728188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:45:29.799900 containerd[1893]: time="2025-01-13T20:45:29.797749036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:45:29.799900 containerd[1893]: time="2025-01-13T20:45:29.797767456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:45:29.799900 containerd[1893]: time="2025-01-13T20:45:29.797786768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:45:29.799900 containerd[1893]: time="2025-01-13T20:45:29.797803711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:45:29.799900 containerd[1893]: time="2025-01-13T20:45:29.797822718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:45:29.799900 containerd[1893]: time="2025-01-13T20:45:29.797839460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 13 20:45:29.799900 containerd[1893]: time="2025-01-13T20:45:29.797859062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:45:29.799900 containerd[1893]: time="2025-01-13T20:45:29.797877623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:45:29.799900 containerd[1893]: time="2025-01-13T20:45:29.797898593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:45:29.800425 containerd[1893]: time="2025-01-13T20:45:29.797916780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:45:29.800425 containerd[1893]: time="2025-01-13T20:45:29.797933736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:45:29.800425 containerd[1893]: time="2025-01-13T20:45:29.797952258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:45:29.800425 containerd[1893]: time="2025-01-13T20:45:29.797973483Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:45:29.800425 containerd[1893]: time="2025-01-13T20:45:29.798002946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:45:29.800425 containerd[1893]: time="2025-01-13T20:45:29.798021149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:45:29.800425 containerd[1893]: time="2025-01-13T20:45:29.798037449Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:45:29.800425 containerd[1893]: time="2025-01-13T20:45:29.798099994Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:45:29.800425 containerd[1893]: time="2025-01-13T20:45:29.798126060Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:45:29.800425 containerd[1893]: time="2025-01-13T20:45:29.798141553Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:45:29.800425 containerd[1893]: time="2025-01-13T20:45:29.798159560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:45:29.800425 containerd[1893]: time="2025-01-13T20:45:29.798173939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:45:29.800425 containerd[1893]: time="2025-01-13T20:45:29.798190307Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:45:29.800425 containerd[1893]: time="2025-01-13T20:45:29.798204050Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:45:29.800928 containerd[1893]: time="2025-01-13T20:45:29.798217820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 20:45:29.805588 containerd[1893]: time="2025-01-13T20:45:29.803065063Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:45:29.805588 containerd[1893]: time="2025-01-13T20:45:29.803151631Z" level=info msg="Connect containerd service" Jan 13 20:45:29.805588 containerd[1893]: time="2025-01-13T20:45:29.803215791Z" level=info msg="using legacy CRI server" Jan 13 20:45:29.805588 containerd[1893]: time="2025-01-13T20:45:29.803225754Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:45:29.805588 containerd[1893]: time="2025-01-13T20:45:29.803417741Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:45:29.806835 containerd[1893]: time="2025-01-13T20:45:29.806795292Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:45:29.807936 
containerd[1893]: time="2025-01-13T20:45:29.807883232Z" level=info msg="Start subscribing containerd event" Jan 13 20:45:29.808000 containerd[1893]: time="2025-01-13T20:45:29.807961796Z" level=info msg="Start recovering state" Jan 13 20:45:29.808089 containerd[1893]: time="2025-01-13T20:45:29.808071865Z" level=info msg="Start event monitor" Jan 13 20:45:29.808130 containerd[1893]: time="2025-01-13T20:45:29.808109751Z" level=info msg="Start snapshots syncer" Jan 13 20:45:29.808130 containerd[1893]: time="2025-01-13T20:45:29.808124729Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:45:29.808209 containerd[1893]: time="2025-01-13T20:45:29.808135998Z" level=info msg="Start streaming server" Jan 13 20:45:29.808463 containerd[1893]: time="2025-01-13T20:45:29.808441193Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:45:29.808996 containerd[1893]: time="2025-01-13T20:45:29.808978812Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:45:29.809215 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:45:29.812585 containerd[1893]: time="2025-01-13T20:45:29.812364760Z" level=info msg="containerd successfully booted in 0.146499s" Jan 13 20:45:29.829489 systemd-networkd[1729]: eth0: Gained IPv6LL Jan 13 20:45:29.834309 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:45:29.837228 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:45:29.846692 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 13 20:45:29.857591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:45:29.864124 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:45:29.963191 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:45:29.977846 amazon-ssm-agent[2070]: Initializing new seelog logger Jan 13 20:45:29.978419 amazon-ssm-agent[2070]: New Seelog Logger Creation Complete Jan 13 20:45:29.978603 amazon-ssm-agent[2070]: 2025/01/13 20:45:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:45:29.978653 amazon-ssm-agent[2070]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:45:29.979151 amazon-ssm-agent[2070]: 2025/01/13 20:45:29 processing appconfig overrides Jan 13 20:45:29.979741 amazon-ssm-agent[2070]: 2025/01/13 20:45:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:45:29.979741 amazon-ssm-agent[2070]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:45:29.979856 amazon-ssm-agent[2070]: 2025/01/13 20:45:29 processing appconfig overrides Jan 13 20:45:29.980183 amazon-ssm-agent[2070]: 2025/01/13 20:45:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:45:29.980183 amazon-ssm-agent[2070]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:45:29.980272 amazon-ssm-agent[2070]: 2025/01/13 20:45:29 processing appconfig overrides Jan 13 20:45:29.980705 amazon-ssm-agent[2070]: 2025-01-13 20:45:29 INFO Proxy environment variables: Jan 13 20:45:29.984780 amazon-ssm-agent[2070]: 2025/01/13 20:45:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:45:29.984780 amazon-ssm-agent[2070]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
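
Note: the long "Start cri plugin with config {PluginConfig:...}" dump above is containerd echoing its effective CRI configuration. The settings that matter most here, rendered as they would look in /etc/containerd/config.toml for containerd 1.7 (a sketch of the same values, not the file shipped on this host):

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The "failed to load cni during init" error is benign at this stage: /etc/cni/net.d is empty until a CNI plugin is installed (typically later, by a networking DaemonSet), and containerd picks the config up once it appears.
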
Jan 13 20:45:29.984943 amazon-ssm-agent[2070]: 2025/01/13 20:45:29 processing appconfig overrides Jan 13 20:45:30.063493 tar[1878]: linux-amd64/LICENSE Jan 13 20:45:30.063900 tar[1878]: linux-amd64/README.md Jan 13 20:45:30.080297 amazon-ssm-agent[2070]: 2025-01-13 20:45:29 INFO https_proxy: Jan 13 20:45:30.080538 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:45:30.180641 amazon-ssm-agent[2070]: 2025-01-13 20:45:29 INFO http_proxy: Jan 13 20:45:30.280340 amazon-ssm-agent[2070]: 2025-01-13 20:45:29 INFO no_proxy: Jan 13 20:45:30.377555 amazon-ssm-agent[2070]: 2025-01-13 20:45:29 INFO Checking if agent identity type OnPrem can be assumed Jan 13 20:45:30.475695 amazon-ssm-agent[2070]: 2025-01-13 20:45:29 INFO Checking if agent identity type EC2 can be assumed Jan 13 20:45:30.575393 amazon-ssm-agent[2070]: 2025-01-13 20:45:30 INFO Agent will take identity from EC2 Jan 13 20:45:30.674457 amazon-ssm-agent[2070]: 2025-01-13 20:45:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:45:30.677237 amazon-ssm-agent[2070]: 2025-01-13 20:45:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:45:30.677424 amazon-ssm-agent[2070]: 2025-01-13 20:45:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:45:30.677510 amazon-ssm-agent[2070]: 2025-01-13 20:45:30 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 20:45:30.677589 amazon-ssm-agent[2070]: 2025-01-13 20:45:30 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 13 20:45:30.677663 amazon-ssm-agent[2070]: 2025-01-13 20:45:30 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 20:45:30.677755 amazon-ssm-agent[2070]: 2025-01-13 20:45:30 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 20:45:30.677827 amazon-ssm-agent[2070]: 2025-01-13 20:45:30 INFO [Registrar] Starting registrar module Jan 13 20:45:30.677942 amazon-ssm-agent[2070]: 2025-01-13 20:45:30 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 20:45:30.678010 amazon-ssm-agent[2070]: 2025-01-13 20:45:30 INFO [EC2Identity] EC2 registration was successful. Jan 13 20:45:30.678108 amazon-ssm-agent[2070]: 2025-01-13 20:45:30 INFO [CredentialRefresher] credentialRefresher has started Jan 13 20:45:30.678108 amazon-ssm-agent[2070]: 2025-01-13 20:45:30 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 20:45:30.678108 amazon-ssm-agent[2070]: 2025-01-13 20:45:30 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 20:45:30.773759 amazon-ssm-agent[2070]: 2025-01-13 20:45:30 INFO [CredentialRefresher] Next credential rotation will be in 31.5166468774 minutes Jan 13 20:45:30.800873 sshd_keygen[1906]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:45:30.827125 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:45:30.837140 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:45:30.847807 systemd[1]: Started sshd@0-172.31.17.145:22-139.178.89.65:42722.service - OpenSSH per-connection server daemon (139.178.89.65:42722). Jan 13 20:45:30.858720 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:45:30.859001 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:45:30.863235 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:45:30.893427 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
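
Note: sshd-keygen.service creates whatever host keys are missing before sshd accepts connections; the "generating new host keys: RSA ECDSA ED25519" line above is its output. The manual equivalent is:

    ssh-keygen -A      # create any missing host keys under /etc/ssh

The "sshd@0-172.31.17.145:22-139.178.89.65:42722.service" unit name above matches socket-activated per-connection sshd instances, i.e. an sshd.socket with Accept=yes spawning one service instance per TCP connection, named after the local and peer endpoints.
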
Jan 13 20:45:30.900756 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:45:30.912542 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:45:30.915525 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:45:31.104310 sshd[2101]: Accepted publickey for core from 139.178.89.65 port 42722 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:45:31.111405 sshd-session[2101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:31.127001 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:45:31.135309 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:45:31.141344 systemd-logind[1872]: New session 1 of user core. Jan 13 20:45:31.171360 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:45:31.181103 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:45:31.187648 (systemd)[2112]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:45:31.337717 systemd[2112]: Queued start job for default target default.target. Jan 13 20:45:31.347635 systemd[2112]: Created slice app.slice - User Application Slice. Jan 13 20:45:31.347684 systemd[2112]: Reached target paths.target - Paths. Jan 13 20:45:31.347706 systemd[2112]: Reached target timers.target - Timers. Jan 13 20:45:31.349178 systemd[2112]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:45:31.364013 systemd[2112]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:45:31.364442 systemd[2112]: Reached target sockets.target - Sockets. Jan 13 20:45:31.364678 systemd[2112]: Reached target basic.target - Basic System. Jan 13 20:45:31.364869 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:45:31.366422 systemd[2112]: Reached target default.target - Main User Target. Jan 13 20:45:31.366490 systemd[2112]: Startup finished in 169ms. Jan 13 20:45:31.372579 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:45:31.526717 systemd[1]: Started sshd@1-172.31.17.145:22-139.178.89.65:37538.service - OpenSSH per-connection server daemon (139.178.89.65:37538). Jan 13 20:45:31.700043 sshd[2123]: Accepted publickey for core from 139.178.89.65 port 37538 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:45:31.701911 amazon-ssm-agent[2070]: 2025-01-13 20:45:31 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 20:45:31.706676 sshd-session[2123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:31.720717 systemd-logind[1872]: New session 2 of user core. Jan 13 20:45:31.731605 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:45:31.807699 amazon-ssm-agent[2070]: 2025-01-13 20:45:31 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2126) started Jan 13 20:45:31.868725 sshd[2127]: Connection closed by 139.178.89.65 port 37538 Jan 13 20:45:31.868595 sshd-session[2123]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:31.876381 systemd[1]: sshd@1-172.31.17.145:22-139.178.89.65:37538.service: Deactivated successfully. Jan 13 20:45:31.883575 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:45:31.885749 systemd-logind[1872]: Session 2 logged out. 
Waiting for processes to exit. Jan 13 20:45:31.886952 systemd-logind[1872]: Removed session 2. Jan 13 20:45:31.905937 amazon-ssm-agent[2070]: 2025-01-13 20:45:31 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 20:45:31.916713 systemd[1]: Started sshd@2-172.31.17.145:22-139.178.89.65:37550.service - OpenSSH per-connection server daemon (139.178.89.65:37550). Jan 13 20:45:32.119750 sshd[2140]: Accepted publickey for core from 139.178.89.65 port 37550 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:45:32.121296 sshd-session[2140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:32.158783 systemd-logind[1872]: New session 3 of user core. Jan 13 20:45:32.173856 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:45:32.299350 sshd[2143]: Connection closed by 139.178.89.65 port 37550 Jan 13 20:45:32.299563 sshd-session[2140]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:32.310796 systemd-logind[1872]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:45:32.311173 systemd[1]: sshd@2-172.31.17.145:22-139.178.89.65:37550.service: Deactivated successfully. Jan 13 20:45:32.318945 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:45:32.324892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:45:32.329193 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:45:32.333472 systemd[1]: Startup finished in 823ms (kernel) + 8.868s (initrd) + 8.807s (userspace) = 18.499s. Jan 13 20:45:32.338038 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:45:32.352450 systemd-logind[1872]: Removed session 3. Jan 13 20:45:32.604501 ntpd[1867]: Listen normally on 7 eth0 [fe80::45c:28ff:fe24:12af%2]:123 Jan 13 20:45:32.604909 ntpd[1867]: 13 Jan 20:45:32 ntpd[1867]: Listen normally on 7 eth0 [fe80::45c:28ff:fe24:12af%2]:123 Jan 13 20:45:34.600610 kubelet[2151]: E0113 20:45:34.600517 2151 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:45:34.603892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:45:34.604123 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:45:34.604498 systemd[1]: kubelet.service: Consumed 1.092s CPU time. Jan 13 20:45:36.405943 systemd-resolved[1733]: Clock change detected. Flushing caches. Jan 13 20:45:43.182564 systemd[1]: Started sshd@3-172.31.17.145:22-139.178.89.65:47804.service - OpenSSH per-connection server daemon (139.178.89.65:47804). Jan 13 20:45:43.390531 sshd[2165]: Accepted publickey for core from 139.178.89.65 port 47804 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:45:43.392029 sshd-session[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:43.397251 systemd-logind[1872]: New session 4 of user core. Jan 13 20:45:43.403366 systemd[1]: Started session-4.scope - Session 4 of User core. 
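
Note: two earlier threads resolve in this stretch. First, ntpd finally binds the IPv6 link-local address ("Listen normally on 7 eth0 [fe80::...]"); the earlier bind failures occurred while the address was still tentative, and ntpd's routing-socket watch picked it up once duplicate address detection completed. The "Clock change detected. Flushing caches" line from systemd-resolved most likely marks ntpd stepping the system clock. Second, the kubelet exits because /var/lib/kubelet/config.yaml does not exist yet; that file is written by kubeadm init or kubeadm join, so a crash-and-restart loop before the node joins a cluster is expected. The kubelet is pointed at the file through a kubeadm-style drop-in, roughly like this (a sketch modeled on the kubeadm packages, not read off this host):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # These files appear after kubeadm runs; until then the variables are
    # unset, which is what the "Referenced but unset environment variable
    # ... KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS" messages above refer to.
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
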
Jan 13 20:45:43.524951 sshd[2167]: Connection closed by 139.178.89.65 port 47804 Jan 13 20:45:43.526377 sshd-session[2165]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:43.529667 systemd[1]: sshd@3-172.31.17.145:22-139.178.89.65:47804.service: Deactivated successfully. Jan 13 20:45:43.531790 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:45:43.533309 systemd-logind[1872]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:45:43.534375 systemd-logind[1872]: Removed session 4. Jan 13 20:45:43.559666 systemd[1]: Started sshd@4-172.31.17.145:22-139.178.89.65:47816.service - OpenSSH per-connection server daemon (139.178.89.65:47816). Jan 13 20:45:43.725033 sshd[2172]: Accepted publickey for core from 139.178.89.65 port 47816 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:45:43.726959 sshd-session[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:43.735815 systemd-logind[1872]: New session 5 of user core. Jan 13 20:45:43.746383 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:45:43.861856 sshd[2174]: Connection closed by 139.178.89.65 port 47816 Jan 13 20:45:43.863186 sshd-session[2172]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:43.867524 systemd[1]: sshd@4-172.31.17.145:22-139.178.89.65:47816.service: Deactivated successfully. Jan 13 20:45:43.871334 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:45:43.873595 systemd-logind[1872]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:45:43.875971 systemd-logind[1872]: Removed session 5. Jan 13 20:45:43.898030 systemd[1]: Started sshd@5-172.31.17.145:22-139.178.89.65:47820.service - OpenSSH per-connection server daemon (139.178.89.65:47820). Jan 13 20:45:44.071285 sshd[2179]: Accepted publickey for core from 139.178.89.65 port 47820 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:45:44.073035 sshd-session[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:44.081063 systemd-logind[1872]: New session 6 of user core. Jan 13 20:45:44.091378 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:45:44.214691 sshd[2181]: Connection closed by 139.178.89.65 port 47820 Jan 13 20:45:44.220633 sshd-session[2179]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:44.228748 systemd[1]: sshd@5-172.31.17.145:22-139.178.89.65:47820.service: Deactivated successfully. Jan 13 20:45:44.230978 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:45:44.232198 systemd-logind[1872]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:45:44.233500 systemd-logind[1872]: Removed session 6. Jan 13 20:45:44.251541 systemd[1]: Started sshd@6-172.31.17.145:22-139.178.89.65:47830.service - OpenSSH per-connection server daemon (139.178.89.65:47830). Jan 13 20:45:44.418853 sshd[2186]: Accepted publickey for core from 139.178.89.65 port 47830 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:45:44.420561 sshd-session[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:44.426315 systemd-logind[1872]: New session 7 of user core. Jan 13 20:45:44.429379 systemd[1]: Started session-7.scope - Session 7 of User core. 
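
Note: the rapid open/close of sessions 4 through 6 above, each lasting about a second, is the typical signature of a remote provisioner running one command per SSH connection. Each login gets a logind session scope, and the first login also starts the per-user service manager (user@500.service); they can be inspected with:

    loginctl list-sessions
    loginctl session-status 5
    systemctl status user@500.service
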
Jan 13 20:45:44.600898 sudo[2189]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:45:44.601403 sudo[2189]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:45:44.630695 sudo[2189]: pam_unix(sudo:session): session closed for user root Jan 13 20:45:44.655100 sshd[2188]: Connection closed by 139.178.89.65 port 47830 Jan 13 20:45:44.656099 sshd-session[2186]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:44.692492 systemd[1]: sshd@6-172.31.17.145:22-139.178.89.65:47830.service: Deactivated successfully. Jan 13 20:45:44.695004 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:45:44.696563 systemd-logind[1872]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:45:44.704649 systemd[1]: Started sshd@7-172.31.17.145:22-139.178.89.65:47834.service - OpenSSH per-connection server daemon (139.178.89.65:47834). Jan 13 20:45:44.706439 systemd-logind[1872]: Removed session 7. Jan 13 20:45:44.874185 sshd[2194]: Accepted publickey for core from 139.178.89.65 port 47834 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:45:44.875990 sshd-session[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:44.883218 systemd-logind[1872]: New session 8 of user core. Jan 13 20:45:44.895442 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:45:44.998333 sudo[2198]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:45:44.998723 sudo[2198]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:45:45.002697 sudo[2198]: pam_unix(sudo:session): session closed for user root Jan 13 20:45:45.008578 sudo[2197]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:45:45.008970 sudo[2197]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:45:45.031672 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:45:45.065166 augenrules[2220]: No rules Jan 13 20:45:45.066961 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:45:45.067367 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:45:45.071188 sudo[2197]: pam_unix(sudo:session): session closed for user root Jan 13 20:45:45.094686 sshd[2196]: Connection closed by 139.178.89.65 port 47834 Jan 13 20:45:45.096362 sshd-session[2194]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:45.103325 systemd[1]: sshd@7-172.31.17.145:22-139.178.89.65:47834.service: Deactivated successfully. Jan 13 20:45:45.105333 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:45:45.106922 systemd-logind[1872]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:45:45.108373 systemd-logind[1872]: Removed session 8. Jan 13 20:45:45.131689 systemd[1]: Started sshd@8-172.31.17.145:22-139.178.89.65:47844.service - OpenSSH per-connection server daemon (139.178.89.65:47844). Jan 13 20:45:45.310740 sshd[2228]: Accepted publickey for core from 139.178.89.65 port 47844 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:45:45.312182 sshd-session[2228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:45:45.317041 systemd-logind[1872]: New session 9 of user core. Jan 13 20:45:45.323374 systemd[1]: Started session-9.scope - Session 9 of User core. 
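
Note: the sudo lines above first delete the shipped audit rule fragments and then restart audit-rules.service, which is why augenrules reports "No rules": augenrules assembles /etc/audit/rules.d/*.rules into a single ruleset, and after the deletions nothing is left to load. The equivalent by hand:

    rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    augenrules --load      # merges /etc/audit/rules.d/*.rules; empty here
    auditctl -l            # prints "No rules"
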
Jan 13 20:45:45.422976 sudo[2231]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:45:45.423385 sudo[2231]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:45:45.424588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:45:45.431407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:45:45.725439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:45:45.741642 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:45:45.823007 kubelet[2250]: E0113 20:45:45.822953 2250 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:45:45.829751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:45:45.829980 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:45:46.337542 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:45:46.340708 (dockerd)[2267]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:45:46.946473 dockerd[2267]: time="2025-01-13T20:45:46.946410207Z" level=info msg="Starting up" Jan 13 20:45:47.108951 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1074114510-merged.mount: Deactivated successfully. Jan 13 20:45:47.158058 dockerd[2267]: time="2025-01-13T20:45:47.158008009Z" level=info msg="Loading containers: start." Jan 13 20:45:47.425458 kernel: Initializing XFRM netlink socket Jan 13 20:45:47.455444 (udev-worker)[2288]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:45:47.532542 systemd-networkd[1729]: docker0: Link UP Jan 13 20:45:47.553576 dockerd[2267]: time="2025-01-13T20:45:47.553514947Z" level=info msg="Loading containers: done." Jan 13 20:45:47.578433 dockerd[2267]: time="2025-01-13T20:45:47.578386462Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:45:47.578705 dockerd[2267]: time="2025-01-13T20:45:47.578507819Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:45:47.578762 dockerd[2267]: time="2025-01-13T20:45:47.578744872Z" level=info msg="Daemon has completed initialization" Jan 13 20:45:47.619414 dockerd[2267]: time="2025-01-13T20:45:47.617992013Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:45:47.619819 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:45:48.104098 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3862742977-merged.mount: Deactivated successfully. Jan 13 20:45:49.173485 containerd[1893]: time="2025-01-13T20:45:49.173386302Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:45:49.894090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2939203719.mount: Deactivated successfully. 
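
Note: dockerd's storage-driver warning above is informational. With CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, dockerd declines the kernel's native overlayfs diff and falls back to the slower naive diff when committing layers or building images; running containers are unaffected. The active driver and this fallback can be confirmed with:

    docker info --format '{{.Driver}}'              # overlay2
    docker info --format '{{json .DriverStatus}}'   # includes "Native Overlay Diff": false
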
Jan 13 20:45:51.534625 containerd[1893]: time="2025-01-13T20:45:51.534574013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:51.536060 containerd[1893]: time="2025-01-13T20:45:51.536019924Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Jan 13 20:45:51.538990 containerd[1893]: time="2025-01-13T20:45:51.537921671Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:51.542440 containerd[1893]: time="2025-01-13T20:45:51.542379564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:51.543768 containerd[1893]: time="2025-01-13T20:45:51.543724576Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.370279616s" Jan 13 20:45:51.543768 containerd[1893]: time="2025-01-13T20:45:51.543765355Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 20:45:51.571303 containerd[1893]: time="2025-01-13T20:45:51.571270081Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 20:45:53.777871 containerd[1893]: time="2025-01-13T20:45:53.777818775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:53.779168 containerd[1893]: time="2025-01-13T20:45:53.779098441Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Jan 13 20:45:53.780443 containerd[1893]: time="2025-01-13T20:45:53.780388948Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:53.784448 containerd[1893]: time="2025-01-13T20:45:53.784405779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:53.785751 containerd[1893]: time="2025-01-13T20:45:53.785611329Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.214166386s" Jan 13 20:45:53.785751 containerd[1893]: time="2025-01-13T20:45:53.785650019Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 13 
20:45:53.811527 containerd[1893]: time="2025-01-13T20:45:53.811486153Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 20:45:55.041858 containerd[1893]: time="2025-01-13T20:45:55.041502712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:55.044533 containerd[1893]: time="2025-01-13T20:45:55.044477284Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Jan 13 20:45:55.046183 containerd[1893]: time="2025-01-13T20:45:55.045979727Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:55.050867 containerd[1893]: time="2025-01-13T20:45:55.050799534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:55.052354 containerd[1893]: time="2025-01-13T20:45:55.051971761Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.240441141s" Jan 13 20:45:55.052354 containerd[1893]: time="2025-01-13T20:45:55.052015630Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 13 20:45:55.077618 containerd[1893]: time="2025-01-13T20:45:55.077569716Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:45:55.854792 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:45:55.861654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:45:56.089442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:45:56.091976 (kubelet)[2544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:45:56.175965 kubelet[2544]: E0113 20:45:56.175834 2544 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:45:56.180300 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:45:56.180498 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:45:56.410537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3991560339.mount: Deactivated successfully. 
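
Note: the PullImage/ImageCreate pairs here and below are containerd's CRI plugin pulling the Kubernetes control-plane images in sequence (apiserver, controller-manager, then scheduler, proxy, coredns, pause). Equivalent pulls by hand against the same containerd instance, versions as in the log:

    ctr --namespace k8s.io images pull registry.k8s.io/kube-apiserver:v1.29.12
    crictl pull registry.k8s.io/kube-scheduler:v1.29.12
    kubeadm config images pull --kubernetes-version v1.29.12   # pre-pulls the whole set
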
Jan 13 20:45:56.943274 containerd[1893]: time="2025-01-13T20:45:56.943168760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:56.944800 containerd[1893]: time="2025-01-13T20:45:56.944637107Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Jan 13 20:45:56.947747 containerd[1893]: time="2025-01-13T20:45:56.946327429Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:56.949386 containerd[1893]: time="2025-01-13T20:45:56.949346721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:56.950128 containerd[1893]: time="2025-01-13T20:45:56.950092237Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.872482326s" Jan 13 20:45:56.950304 containerd[1893]: time="2025-01-13T20:45:56.950282634Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 20:45:56.986939 containerd[1893]: time="2025-01-13T20:45:56.986751070Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:45:57.725460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2928235863.mount: Deactivated successfully. 
Jan 13 20:45:58.787609 containerd[1893]: time="2025-01-13T20:45:58.787549046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:58.789035 containerd[1893]: time="2025-01-13T20:45:58.788807372Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 20:45:58.791173 containerd[1893]: time="2025-01-13T20:45:58.790242427Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:58.794577 containerd[1893]: time="2025-01-13T20:45:58.793245966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:58.794577 containerd[1893]: time="2025-01-13T20:45:58.794407220Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.806925674s" Jan 13 20:45:58.794577 containerd[1893]: time="2025-01-13T20:45:58.794443893Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 20:45:58.821405 containerd[1893]: time="2025-01-13T20:45:58.821366139Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:45:59.363109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount401637412.mount: Deactivated successfully. 
Jan 13 20:45:59.376655 containerd[1893]: time="2025-01-13T20:45:59.376603218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:59.379771 containerd[1893]: time="2025-01-13T20:45:59.379683630Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 20:45:59.383867 containerd[1893]: time="2025-01-13T20:45:59.383786516Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:59.394467 containerd[1893]: time="2025-01-13T20:45:59.394239571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:59.397416 containerd[1893]: time="2025-01-13T20:45:59.396591324Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 575.018096ms" Jan 13 20:45:59.397416 containerd[1893]: time="2025-01-13T20:45:59.396638007Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 20:45:59.441791 containerd[1893]: time="2025-01-13T20:45:59.441742306Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 20:46:00.083414 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 20:46:00.097205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3576439630.mount: Deactivated successfully. 
Jan 13 20:46:03.732505 containerd[1893]: time="2025-01-13T20:46:03.732454895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:03.734348 containerd[1893]: time="2025-01-13T20:46:03.734283483Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 13 20:46:03.736845 containerd[1893]: time="2025-01-13T20:46:03.736565372Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:03.740896 containerd[1893]: time="2025-01-13T20:46:03.740829946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:03.743065 containerd[1893]: time="2025-01-13T20:46:03.742832818Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.30104162s" Jan 13 20:46:03.743065 containerd[1893]: time="2025-01-13T20:46:03.742886204Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 13 20:46:06.273536 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 20:46:06.284285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:46:06.583392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:46:06.603079 (kubelet)[2731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:46:06.702061 kubelet[2731]: E0113 20:46:06.701990 2731 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:46:06.705238 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:46:06.705579 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:46:07.037833 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:46:07.049582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:46:07.087393 systemd[1]: Reloading requested from client PID 2747 ('systemctl') (unit session-9.scope)... Jan 13 20:46:07.087586 systemd[1]: Reloading... Jan 13 20:46:07.253180 zram_generator::config[2790]: No configuration found. Jan 13 20:46:07.446544 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:46:07.539786 systemd[1]: Reloading finished in 451 ms. Jan 13 20:46:07.602433 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:46:07.602551 systemd[1]: kubelet.service: Failed with result 'signal'. 
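
The image pulls logged between 20:45:51 and 20:46:03 give enough data to estimate registry throughput from this node. A short sketch that recomputes effective pull rate from the sizes and durations containerd reported; the byte counts and timings are copied from the log, while the MB/s figures are derived, not logged:

# (image, bytes, seconds) as reported in the "Pulled image ... in ..." lines above.
pulls = [
    ("kube-apiserver:v1.29.12",          35136054, 2.370279616),
    ("kube-controller-manager:v1.29.12", 33662844, 2.214166386),
    ("kube-scheduler:v1.29.12",          18777952, 1.240441141),
    ("kube-proxy:v1.29.12",              28618977, 1.872482326),
    ("coredns:v1.11.1",                  18182961, 1.806925674),
    ("pause:3.9",                           321520, 0.575018096),
    ("etcd:3.5.10-0",                    56649232, 4.30104162),
]

for name, size, secs in pulls:
    rate = size / secs / 1e6  # decimal MB/s
    print(f"{name:35s} {size/1e6:7.1f} MB in {secs:6.2f}s -> {rate:5.1f} MB/s")
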
Jan 13 20:46:07.602933 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:46:07.608665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:46:07.843557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:46:07.854837 (kubelet)[2847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:46:07.951918 kubelet[2847]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:46:07.952274 kubelet[2847]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:46:07.952274 kubelet[2847]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:46:07.954041 kubelet[2847]: I0113 20:46:07.953970 2847 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:46:08.411042 kubelet[2847]: I0113 20:46:08.410991 2847 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:46:08.411042 kubelet[2847]: I0113 20:46:08.411050 2847 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:46:08.411477 kubelet[2847]: I0113 20:46:08.411451 2847 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:46:08.527310 kubelet[2847]: I0113 20:46:08.527167 2847 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:46:08.534654 kubelet[2847]: E0113 20:46:08.534529 2847 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.17.145:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.17.145:6443: connect: connection refused Jan 13 20:46:08.552461 kubelet[2847]: I0113 20:46:08.552309 2847 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:46:08.555605 kubelet[2847]: I0113 20:46:08.555506 2847 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:46:08.558864 kubelet[2847]: I0113 20:46:08.558799 2847 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:46:08.562061 kubelet[2847]: I0113 20:46:08.561921 2847 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:46:08.562061 kubelet[2847]: I0113 20:46:08.562061 2847 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:46:08.562347 kubelet[2847]: I0113 20:46:08.562320 2847 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:46:08.564230 kubelet[2847]: W0113 20:46:08.564173 2847 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.17.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-145&limit=500&resourceVersion=0": dial tcp 172.31.17.145:6443: connect: connection refused Jan 13 20:46:08.564506 kubelet[2847]: E0113 20:46:08.564236 2847 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.17.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-145&limit=500&resourceVersion=0": dial tcp 172.31.17.145:6443: connect: connection refused Jan 13 20:46:08.565841 kubelet[2847]: I0113 20:46:08.565802 2847 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:46:08.565921 kubelet[2847]: I0113 20:46:08.565849 2847 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:46:08.565921 kubelet[2847]: I0113 20:46:08.565903 2847 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:46:08.566114 kubelet[2847]: I0113 20:46:08.566021 2847 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:46:08.570169 kubelet[2847]: W0113 20:46:08.568198 2847 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.17.145:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
172.31.17.145:6443: connect: connection refused Jan 13 20:46:08.570169 kubelet[2847]: E0113 20:46:08.568256 2847 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.17.145:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.145:6443: connect: connection refused Jan 13 20:46:08.570169 kubelet[2847]: I0113 20:46:08.568825 2847 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:46:08.576896 kubelet[2847]: I0113 20:46:08.576843 2847 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:46:08.577043 kubelet[2847]: W0113 20:46:08.576947 2847 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:46:08.578209 kubelet[2847]: I0113 20:46:08.577775 2847 server.go:1256] "Started kubelet" Jan 13 20:46:08.579697 kubelet[2847]: I0113 20:46:08.579051 2847 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:46:08.587926 kubelet[2847]: I0113 20:46:08.587888 2847 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:46:08.589593 kubelet[2847]: I0113 20:46:08.589163 2847 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:46:08.595165 kubelet[2847]: I0113 20:46:08.594348 2847 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:46:08.595165 kubelet[2847]: I0113 20:46:08.594586 2847 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:46:08.596599 kubelet[2847]: E0113 20:46:08.596575 2847 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.145:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.145:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-145.181a5b75fa8f4cb5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-145,UID:ip-172-31-17-145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-145,},FirstTimestamp:2025-01-13 20:46:08.577744053 +0000 UTC m=+0.713076539,LastTimestamp:2025-01-13 20:46:08.577744053 +0000 UTC m=+0.713076539,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-145,}" Jan 13 20:46:08.596894 kubelet[2847]: I0113 20:46:08.596736 2847 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:46:08.597095 kubelet[2847]: I0113 20:46:08.596759 2847 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:46:08.598263 kubelet[2847]: I0113 20:46:08.598238 2847 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:46:08.598884 kubelet[2847]: E0113 20:46:08.598861 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-145?timeout=10s\": dial tcp 172.31.17.145:6443: connect: connection refused" interval="200ms" Jan 13 20:46:08.599360 kubelet[2847]: W0113 20:46:08.599308 2847 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://172.31.17.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.145:6443: connect: connection refused Jan 13 20:46:08.599432 kubelet[2847]: E0113 20:46:08.599369 2847 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.17.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.145:6443: connect: connection refused Jan 13 20:46:08.602961 kubelet[2847]: I0113 20:46:08.602939 2847 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:46:08.603469 kubelet[2847]: I0113 20:46:08.603198 2847 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:46:08.607775 kubelet[2847]: I0113 20:46:08.606955 2847 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:46:08.619118 kubelet[2847]: I0113 20:46:08.619068 2847 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:46:08.621892 kubelet[2847]: I0113 20:46:08.621863 2847 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:46:08.622817 kubelet[2847]: I0113 20:46:08.622021 2847 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:46:08.622817 kubelet[2847]: I0113 20:46:08.622048 2847 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:46:08.622817 kubelet[2847]: E0113 20:46:08.622100 2847 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:46:08.630249 kubelet[2847]: W0113 20:46:08.630119 2847 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.17.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.145:6443: connect: connection refused Jan 13 20:46:08.630249 kubelet[2847]: E0113 20:46:08.630201 2847 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.145:6443: connect: connection refused Jan 13 20:46:08.639256 kubelet[2847]: E0113 20:46:08.638783 2847 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:46:08.646526 kubelet[2847]: I0113 20:46:08.646454 2847 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:46:08.646526 kubelet[2847]: I0113 20:46:08.646523 2847 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:46:08.646819 kubelet[2847]: I0113 20:46:08.646551 2847 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:46:08.656416 kubelet[2847]: I0113 20:46:08.656357 2847 policy_none.go:49] "None policy: Start" Jan 13 20:46:08.657973 kubelet[2847]: I0113 20:46:08.657671 2847 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:46:08.657973 kubelet[2847]: I0113 20:46:08.657701 2847 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:46:08.670946 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 13 20:46:08.688276 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:46:08.700768 kubelet[2847]: I0113 20:46:08.700735 2847 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-145" Jan 13 20:46:08.701164 kubelet[2847]: E0113 20:46:08.701122 2847 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.145:6443/api/v1/nodes\": dial tcp 172.31.17.145:6443: connect: connection refused" node="ip-172-31-17-145" Jan 13 20:46:08.705092 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:46:08.706988 kubelet[2847]: I0113 20:46:08.706958 2847 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:46:08.707340 kubelet[2847]: I0113 20:46:08.707311 2847 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:46:08.710735 kubelet[2847]: E0113 20:46:08.710666 2847 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-145\" not found" Jan 13 20:46:08.723294 kubelet[2847]: I0113 20:46:08.722986 2847 topology_manager.go:215] "Topology Admit Handler" podUID="410792bb940348ca24c82202f21fa371" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-17-145" Jan 13 20:46:08.724958 kubelet[2847]: I0113 20:46:08.724929 2847 topology_manager.go:215] "Topology Admit Handler" podUID="8d3ef95cc011e3c14d21c64e53b569af" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-17-145" Jan 13 20:46:08.730591 kubelet[2847]: I0113 20:46:08.730566 2847 topology_manager.go:215] "Topology Admit Handler" podUID="b682b8e1b59c958527b5f20b9fd8a476" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-17-145" Jan 13 20:46:08.743276 systemd[1]: Created slice kubepods-burstable-pod410792bb940348ca24c82202f21fa371.slice - libcontainer container kubepods-burstable-pod410792bb940348ca24c82202f21fa371.slice. Jan 13 20:46:08.772686 systemd[1]: Created slice kubepods-burstable-pod8d3ef95cc011e3c14d21c64e53b569af.slice - libcontainer container kubepods-burstable-pod8d3ef95cc011e3c14d21c64e53b569af.slice. Jan 13 20:46:08.790955 systemd[1]: Created slice kubepods-burstable-podb682b8e1b59c958527b5f20b9fd8a476.slice - libcontainer container kubepods-burstable-podb682b8e1b59c958527b5f20b9fd8a476.slice. 
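
The slice names systemd creates here mirror the kubelet's cgroup layout: a top-level kubepods.slice, a child slice per QoS class, and one slice per pod keyed by its UID. A sketch of that naming scheme as it appears in this log (real pod UIDs contain dashes that the kubelet escapes before handing the name to systemd; the static-pod hashes above contain none, so plain concatenation reproduces the logged names):

def pod_slice(qos: str, pod_uid: str) -> str:
    """Build a systemd slice name like the ones the kubelet creates above.

    qos is "burstable" or "besteffort"; pass "" for Guaranteed pods, which
    land directly under kubepods.slice.
    """
    parent = f"kubepods-{qos}" if qos else "kubepods"
    return f"{parent}-pod{pod_uid}.slice"

# Reproduces the names logged above for the three static control-plane pods:
for uid in ("410792bb940348ca24c82202f21fa371",
            "8d3ef95cc011e3c14d21c64e53b569af",
            "b682b8e1b59c958527b5f20b9fd8a476"):
    print(pod_slice("burstable", uid))
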
Jan 13 20:46:08.799431 kubelet[2847]: I0113 20:46:08.799394 2847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d3ef95cc011e3c14d21c64e53b569af-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-145\" (UID: \"8d3ef95cc011e3c14d21c64e53b569af\") " pod="kube-system/kube-controller-manager-ip-172-31-17-145" Jan 13 20:46:08.799608 kubelet[2847]: I0113 20:46:08.799448 2847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/410792bb940348ca24c82202f21fa371-ca-certs\") pod \"kube-apiserver-ip-172-31-17-145\" (UID: \"410792bb940348ca24c82202f21fa371\") " pod="kube-system/kube-apiserver-ip-172-31-17-145" Jan 13 20:46:08.799608 kubelet[2847]: I0113 20:46:08.799478 2847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/410792bb940348ca24c82202f21fa371-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-145\" (UID: \"410792bb940348ca24c82202f21fa371\") " pod="kube-system/kube-apiserver-ip-172-31-17-145" Jan 13 20:46:08.799608 kubelet[2847]: I0113 20:46:08.799504 2847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/410792bb940348ca24c82202f21fa371-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-145\" (UID: \"410792bb940348ca24c82202f21fa371\") " pod="kube-system/kube-apiserver-ip-172-31-17-145" Jan 13 20:46:08.799608 kubelet[2847]: I0113 20:46:08.799530 2847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d3ef95cc011e3c14d21c64e53b569af-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-145\" (UID: \"8d3ef95cc011e3c14d21c64e53b569af\") " pod="kube-system/kube-controller-manager-ip-172-31-17-145" Jan 13 20:46:08.799608 kubelet[2847]: I0113 20:46:08.799557 2847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8d3ef95cc011e3c14d21c64e53b569af-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-145\" (UID: \"8d3ef95cc011e3c14d21c64e53b569af\") " pod="kube-system/kube-controller-manager-ip-172-31-17-145" Jan 13 20:46:08.799824 kubelet[2847]: I0113 20:46:08.799586 2847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d3ef95cc011e3c14d21c64e53b569af-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-145\" (UID: \"8d3ef95cc011e3c14d21c64e53b569af\") " pod="kube-system/kube-controller-manager-ip-172-31-17-145" Jan 13 20:46:08.799824 kubelet[2847]: I0113 20:46:08.799617 2847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d3ef95cc011e3c14d21c64e53b569af-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-145\" (UID: \"8d3ef95cc011e3c14d21c64e53b569af\") " pod="kube-system/kube-controller-manager-ip-172-31-17-145" Jan 13 20:46:08.799824 kubelet[2847]: I0113 20:46:08.799651 2847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b682b8e1b59c958527b5f20b9fd8a476-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-145\" (UID: \"b682b8e1b59c958527b5f20b9fd8a476\") " pod="kube-system/kube-scheduler-ip-172-31-17-145" Jan 13 20:46:08.800292 kubelet[2847]: E0113 20:46:08.800261 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-145?timeout=10s\": dial tcp 172.31.17.145:6443: connect: connection refused" interval="400ms" Jan 13 20:46:08.906500 kubelet[2847]: I0113 20:46:08.906414 2847 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-145" Jan 13 20:46:08.907559 kubelet[2847]: E0113 20:46:08.907529 2847 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.145:6443/api/v1/nodes\": dial tcp 172.31.17.145:6443: connect: connection refused" node="ip-172-31-17-145" Jan 13 20:46:09.066361 containerd[1893]: time="2025-01-13T20:46:09.066312317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-145,Uid:410792bb940348ca24c82202f21fa371,Namespace:kube-system,Attempt:0,}" Jan 13 20:46:09.087309 containerd[1893]: time="2025-01-13T20:46:09.087260404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-145,Uid:8d3ef95cc011e3c14d21c64e53b569af,Namespace:kube-system,Attempt:0,}" Jan 13 20:46:09.094387 containerd[1893]: time="2025-01-13T20:46:09.094341362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-145,Uid:b682b8e1b59c958527b5f20b9fd8a476,Namespace:kube-system,Attempt:0,}" Jan 13 20:46:09.201424 kubelet[2847]: E0113 20:46:09.201375 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-145?timeout=10s\": dial tcp 172.31.17.145:6443: connect: connection refused" interval="800ms" Jan 13 20:46:09.310131 kubelet[2847]: I0113 20:46:09.309865 2847 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-145" Jan 13 20:46:09.310298 kubelet[2847]: E0113 20:46:09.310217 2847 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.145:6443/api/v1/nodes\": dial tcp 172.31.17.145:6443: connect: connection refused" node="ip-172-31-17-145" Jan 13 20:46:09.517684 kubelet[2847]: W0113 20:46:09.517547 2847 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.17.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-145&limit=500&resourceVersion=0": dial tcp 172.31.17.145:6443: connect: connection refused Jan 13 20:46:09.517684 kubelet[2847]: E0113 20:46:09.517615 2847 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.17.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-145&limit=500&resourceVersion=0": dial tcp 172.31.17.145:6443: connect: connection refused Jan 13 20:46:09.623610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount627301218.mount: Deactivated successfully. 
Jan 13 20:46:09.639995 containerd[1893]: time="2025-01-13T20:46:09.639935827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:46:09.644550 containerd[1893]: time="2025-01-13T20:46:09.644501798Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:46:09.650267 containerd[1893]: time="2025-01-13T20:46:09.650080257Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 20:46:09.654097 containerd[1893]: time="2025-01-13T20:46:09.654005225Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:46:09.659180 containerd[1893]: time="2025-01-13T20:46:09.659114487Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:46:09.663806 containerd[1893]: time="2025-01-13T20:46:09.662080081Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:46:09.663806 containerd[1893]: time="2025-01-13T20:46:09.662479039Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:46:09.667743 containerd[1893]: time="2025-01-13T20:46:09.667680115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:46:09.668621 containerd[1893]: time="2025-01-13T20:46:09.668576946Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 581.200964ms" Jan 13 20:46:09.670500 containerd[1893]: time="2025-01-13T20:46:09.670464364Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 604.032755ms" Jan 13 20:46:09.672502 containerd[1893]: time="2025-01-13T20:46:09.672459572Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 578.010033ms" Jan 13 20:46:09.874564 kubelet[2847]: W0113 20:46:09.874394 2847 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.17.145:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.145:6443: connect: connection refused Jan 13 20:46:09.874852 kubelet[2847]: E0113 
20:46:09.874592 2847 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.17.145:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.145:6443: connect: connection refused Jan 13 20:46:10.002328 kubelet[2847]: E0113 20:46:10.002296 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-145?timeout=10s\": dial tcp 172.31.17.145:6443: connect: connection refused" interval="1.6s" Jan 13 20:46:10.008081 containerd[1893]: time="2025-01-13T20:46:10.003187960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:10.008081 containerd[1893]: time="2025-01-13T20:46:10.003254434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:10.008081 containerd[1893]: time="2025-01-13T20:46:10.003277724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:10.008081 containerd[1893]: time="2025-01-13T20:46:10.003545193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:10.008081 containerd[1893]: time="2025-01-13T20:46:09.996006231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:10.008081 containerd[1893]: time="2025-01-13T20:46:10.001494483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:10.008081 containerd[1893]: time="2025-01-13T20:46:10.001541686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:10.008081 containerd[1893]: time="2025-01-13T20:46:10.001711520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:10.014852 containerd[1893]: time="2025-01-13T20:46:10.014520406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:10.014998 containerd[1893]: time="2025-01-13T20:46:10.014890193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:10.014998 containerd[1893]: time="2025-01-13T20:46:10.014958129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:10.015258 containerd[1893]: time="2025-01-13T20:46:10.015189208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:10.049035 systemd[1]: Started cri-containerd-577c204a14bf3c8ed3ac5695daf2758a298ed3d41cdcb1aa62531c5a0c64c04b.scope - libcontainer container 577c204a14bf3c8ed3ac5695daf2758a298ed3d41cdcb1aa62531c5a0c64c04b. Jan 13 20:46:10.063414 systemd[1]: Started cri-containerd-f441669f4476d202f95701f8137298aafd3b56669e8a4acc3f64d60c146ac937.scope - libcontainer container f441669f4476d202f95701f8137298aafd3b56669e8a4acc3f64d60c146ac937. 
Jan 13 20:46:10.069374 systemd[1]: Started cri-containerd-c3474f131ca19197e6962ccc6aea5593a70f2c1d33267bff2c860b5a827fe95a.scope - libcontainer container c3474f131ca19197e6962ccc6aea5593a70f2c1d33267bff2c860b5a827fe95a. Jan 13 20:46:10.080442 kubelet[2847]: W0113 20:46:10.080372 2847 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.17.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.145:6443: connect: connection refused Jan 13 20:46:10.080442 kubelet[2847]: E0113 20:46:10.080439 2847 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.17.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.145:6443: connect: connection refused Jan 13 20:46:10.095119 kubelet[2847]: W0113 20:46:10.095008 2847 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.17.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.145:6443: connect: connection refused Jan 13 20:46:10.095119 kubelet[2847]: E0113 20:46:10.095084 2847 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.145:6443: connect: connection refused Jan 13 20:46:10.114906 kubelet[2847]: I0113 20:46:10.114518 2847 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-145" Jan 13 20:46:10.114906 kubelet[2847]: E0113 20:46:10.114881 2847 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.145:6443/api/v1/nodes\": dial tcp 172.31.17.145:6443: connect: connection refused" node="ip-172-31-17-145" Jan 13 20:46:10.157536 containerd[1893]: time="2025-01-13T20:46:10.156369608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-145,Uid:410792bb940348ca24c82202f21fa371,Namespace:kube-system,Attempt:0,} returns sandbox id \"577c204a14bf3c8ed3ac5695daf2758a298ed3d41cdcb1aa62531c5a0c64c04b\"" Jan 13 20:46:10.170872 containerd[1893]: time="2025-01-13T20:46:10.170783132Z" level=info msg="CreateContainer within sandbox \"577c204a14bf3c8ed3ac5695daf2758a298ed3d41cdcb1aa62531c5a0c64c04b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:46:10.192110 containerd[1893]: time="2025-01-13T20:46:10.192070762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-145,Uid:b682b8e1b59c958527b5f20b9fd8a476,Namespace:kube-system,Attempt:0,} returns sandbox id \"f441669f4476d202f95701f8137298aafd3b56669e8a4acc3f64d60c146ac937\"" Jan 13 20:46:10.195535 containerd[1893]: time="2025-01-13T20:46:10.195432681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-145,Uid:8d3ef95cc011e3c14d21c64e53b569af,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3474f131ca19197e6962ccc6aea5593a70f2c1d33267bff2c860b5a827fe95a\"" Jan 13 20:46:10.198021 containerd[1893]: time="2025-01-13T20:46:10.197982329Z" level=info msg="CreateContainer within sandbox \"f441669f4476d202f95701f8137298aafd3b56669e8a4acc3f64d60c146ac937\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:46:10.220776 containerd[1893]: 
time="2025-01-13T20:46:10.220719486Z" level=info msg="CreateContainer within sandbox \"c3474f131ca19197e6962ccc6aea5593a70f2c1d33267bff2c860b5a827fe95a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:46:10.261805 containerd[1893]: time="2025-01-13T20:46:10.261747500Z" level=info msg="CreateContainer within sandbox \"577c204a14bf3c8ed3ac5695daf2758a298ed3d41cdcb1aa62531c5a0c64c04b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7d68bd504bb6a100775dc6b6c9126304780e7dac9007300f095376e67ad355bc\"" Jan 13 20:46:10.264576 containerd[1893]: time="2025-01-13T20:46:10.263879040Z" level=info msg="StartContainer for \"7d68bd504bb6a100775dc6b6c9126304780e7dac9007300f095376e67ad355bc\"" Jan 13 20:46:10.271990 containerd[1893]: time="2025-01-13T20:46:10.271854063Z" level=info msg="CreateContainer within sandbox \"f441669f4476d202f95701f8137298aafd3b56669e8a4acc3f64d60c146ac937\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"398c14a53b667423d476c9a29397d32ae3efdc1c760a2d6ca62e694dc66c191f\"" Jan 13 20:46:10.273723 containerd[1893]: time="2025-01-13T20:46:10.272523172Z" level=info msg="StartContainer for \"398c14a53b667423d476c9a29397d32ae3efdc1c760a2d6ca62e694dc66c191f\"" Jan 13 20:46:10.281903 containerd[1893]: time="2025-01-13T20:46:10.281858220Z" level=info msg="CreateContainer within sandbox \"c3474f131ca19197e6962ccc6aea5593a70f2c1d33267bff2c860b5a827fe95a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9f289ed9a522ce1cd40fd721adc9191020049f3edba93110a2d243e6e2043e25\"" Jan 13 20:46:10.283327 containerd[1893]: time="2025-01-13T20:46:10.283295284Z" level=info msg="StartContainer for \"9f289ed9a522ce1cd40fd721adc9191020049f3edba93110a2d243e6e2043e25\"" Jan 13 20:46:10.318103 systemd[1]: Started cri-containerd-7d68bd504bb6a100775dc6b6c9126304780e7dac9007300f095376e67ad355bc.scope - libcontainer container 7d68bd504bb6a100775dc6b6c9126304780e7dac9007300f095376e67ad355bc. Jan 13 20:46:10.338381 systemd[1]: Started cri-containerd-398c14a53b667423d476c9a29397d32ae3efdc1c760a2d6ca62e694dc66c191f.scope - libcontainer container 398c14a53b667423d476c9a29397d32ae3efdc1c760a2d6ca62e694dc66c191f. Jan 13 20:46:10.363476 systemd[1]: Started cri-containerd-9f289ed9a522ce1cd40fd721adc9191020049f3edba93110a2d243e6e2043e25.scope - libcontainer container 9f289ed9a522ce1cd40fd721adc9191020049f3edba93110a2d243e6e2043e25. 
Jan 13 20:46:10.490912 containerd[1893]: time="2025-01-13T20:46:10.490781000Z" level=info msg="StartContainer for \"7d68bd504bb6a100775dc6b6c9126304780e7dac9007300f095376e67ad355bc\" returns successfully" Jan 13 20:46:10.517942 containerd[1893]: time="2025-01-13T20:46:10.517897635Z" level=info msg="StartContainer for \"9f289ed9a522ce1cd40fd721adc9191020049f3edba93110a2d243e6e2043e25\" returns successfully" Jan 13 20:46:10.569388 containerd[1893]: time="2025-01-13T20:46:10.569327489Z" level=info msg="StartContainer for \"398c14a53b667423d476c9a29397d32ae3efdc1c760a2d6ca62e694dc66c191f\" returns successfully" Jan 13 20:46:10.588046 kubelet[2847]: E0113 20:46:10.588011 2847 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.17.145:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.17.145:6443: connect: connection refused Jan 13 20:46:11.603500 kubelet[2847]: E0113 20:46:11.603460 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-145?timeout=10s\": dial tcp 172.31.17.145:6443: connect: connection refused" interval="3.2s" Jan 13 20:46:11.718224 kubelet[2847]: I0113 20:46:11.716728 2847 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-145" Jan 13 20:46:11.718224 kubelet[2847]: E0113 20:46:11.717076 2847 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.145:6443/api/v1/nodes\": dial tcp 172.31.17.145:6443: connect: connection refused" node="ip-172-31-17-145" Jan 13 20:46:13.568655 kubelet[2847]: I0113 20:46:13.568544 2847 apiserver.go:52] "Watching apiserver" Jan 13 20:46:13.599555 kubelet[2847]: I0113 20:46:13.599507 2847 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:46:13.619969 kubelet[2847]: E0113 20:46:13.619913 2847 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-17-145" not found Jan 13 20:46:13.974103 kubelet[2847]: E0113 20:46:13.973989 2847 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-17-145" not found Jan 13 20:46:14.440430 kubelet[2847]: E0113 20:46:14.440396 2847 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-17-145" not found Jan 13 20:46:14.641595 update_engine[1873]: I20250113 20:46:14.641520 1873 update_attempter.cc:509] Updating boot flags... 
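
One pattern worth reading out of the kubelet entries: the "Failed to ensure lease exists, will retry" interval doubles on every failure, 200ms, 400ms, 800ms, 1.6s, and 3.2s here, while the API server at 172.31.17.145:6443 is still coming up. A sketch of that doubling schedule; the starting value and factor are read off the log, and the cap is an assumption, since the retries stop (the API server answers) before any cap becomes visible:

import itertools

def lease_retry_intervals(base_ms=200, factor=2, cap_ms=7000):
    # base and factor match the logged intervals (200ms, 400ms, 800ms, 1.6s, 3.2s);
    # cap_ms is a guess, not something this log demonstrates.
    for attempt in itertools.count():
        yield min(base_ms * factor ** attempt, cap_ms)

print([ms / 1000 for ms in itertools.islice(lease_retry_intervals(), 5)])
# -> [0.2, 0.4, 0.8, 1.6, 3.2]
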
Jan 13 20:46:14.764639 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3134) Jan 13 20:46:14.818052 kubelet[2847]: E0113 20:46:14.816393 2847 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-145\" not found" node="ip-172-31-17-145" Jan 13 20:46:14.924749 kubelet[2847]: I0113 20:46:14.924707 2847 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-145" Jan 13 20:46:14.966181 kubelet[2847]: I0113 20:46:14.965803 2847 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-17-145" Jan 13 20:46:15.056726 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3134) Jan 13 20:46:16.676733 systemd[1]: Reloading requested from client PID 3303 ('systemctl') (unit session-9.scope)... Jan 13 20:46:16.676752 systemd[1]: Reloading... Jan 13 20:46:16.822259 zram_generator::config[3349]: No configuration found. Jan 13 20:46:16.951672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:46:17.058991 systemd[1]: Reloading finished in 381 ms. Jan 13 20:46:17.113317 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:46:17.122773 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:46:17.123122 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:46:17.128549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:46:17.363303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:46:17.375647 (kubelet)[3400]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:46:17.484761 kubelet[3400]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:46:17.485077 kubelet[3400]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:46:17.485117 kubelet[3400]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:46:17.486187 kubelet[3400]: I0113 20:46:17.486118 3400 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:46:17.503316 kubelet[3400]: I0113 20:46:17.503280 3400 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:46:17.503484 kubelet[3400]: I0113 20:46:17.503460 3400 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:46:17.504019 kubelet[3400]: I0113 20:46:17.503916 3400 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:46:17.506754 kubelet[3400]: I0113 20:46:17.506514 3400 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 13 20:46:17.510125 kubelet[3400]: I0113 20:46:17.509561 3400 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:46:17.518357 kubelet[3400]: I0113 20:46:17.518278 3400 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:46:17.518692 kubelet[3400]: I0113 20:46:17.518554 3400 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:46:17.518820 kubelet[3400]: I0113 20:46:17.518794 3400 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:46:17.518951 kubelet[3400]: I0113 20:46:17.518830 3400 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:46:17.518951 kubelet[3400]: I0113 20:46:17.518844 3400 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:46:17.518951 kubelet[3400]: I0113 20:46:17.518886 3400 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:46:17.519103 kubelet[3400]: I0113 20:46:17.518997 3400 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:46:17.519103 kubelet[3400]: I0113 20:46:17.519015 3400 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:46:17.519103 kubelet[3400]: I0113 20:46:17.519048 3400 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:46:17.519103 kubelet[3400]: I0113 20:46:17.519069 3400 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:46:17.523051 kubelet[3400]: I0113 20:46:17.522844 3400 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:46:17.525407 kubelet[3400]: I0113 20:46:17.525380 3400 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:46:17.529431 kubelet[3400]: I0113 20:46:17.528881 3400 server.go:1256] "Started kubelet" Jan 13 20:46:17.531739 kubelet[3400]: I0113 20:46:17.531714 3400 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:46:17.548132 sudo[3413]: root : PWD=/home/core ; USER=root ; 
COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:46:17.548590 sudo[3413]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:46:17.554927 kubelet[3400]: I0113 20:46:17.554891 3400 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:46:17.557224 kubelet[3400]: I0113 20:46:17.557051 3400 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:46:17.567767 kubelet[3400]: I0113 20:46:17.562269 3400 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:46:17.567767 kubelet[3400]: I0113 20:46:17.563442 3400 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:46:17.578900 kubelet[3400]: I0113 20:46:17.577392 3400 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:46:17.590222 kubelet[3400]: I0113 20:46:17.590194 3400 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:46:17.590549 kubelet[3400]: I0113 20:46:17.590535 3400 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:46:17.592513 kubelet[3400]: I0113 20:46:17.592491 3400 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:46:17.593569 kubelet[3400]: I0113 20:46:17.593498 3400 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:46:17.609932 kubelet[3400]: I0113 20:46:17.609671 3400 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:46:17.612937 kubelet[3400]: I0113 20:46:17.612908 3400 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:46:17.614746 kubelet[3400]: I0113 20:46:17.614657 3400 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:46:17.614882 kubelet[3400]: I0113 20:46:17.614872 3400 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:46:17.615222 kubelet[3400]: I0113 20:46:17.614958 3400 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:46:17.615222 kubelet[3400]: E0113 20:46:17.615019 3400 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:46:17.640641 kubelet[3400]: E0113 20:46:17.638403 3400 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:46:17.689851 kubelet[3400]: I0113 20:46:17.689794 3400 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-145" Jan 13 20:46:17.707325 kubelet[3400]: I0113 20:46:17.707294 3400 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-17-145" Jan 13 20:46:17.708949 kubelet[3400]: I0113 20:46:17.708247 3400 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-17-145" Jan 13 20:46:17.716241 kubelet[3400]: E0113 20:46:17.716195 3400 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:46:17.729179 kubelet[3400]: I0113 20:46:17.728403 3400 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:46:17.729179 kubelet[3400]: I0113 20:46:17.728431 3400 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:46:17.729179 kubelet[3400]: I0113 20:46:17.728456 3400 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:46:17.729179 kubelet[3400]: I0113 20:46:17.728687 3400 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:46:17.729179 kubelet[3400]: I0113 20:46:17.728725 3400 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:46:17.729179 kubelet[3400]: I0113 20:46:17.728735 3400 policy_none.go:49] "None policy: Start" Jan 13 20:46:17.732527 kubelet[3400]: I0113 20:46:17.732388 3400 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:46:17.732527 kubelet[3400]: I0113 20:46:17.732431 3400 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:46:17.733026 kubelet[3400]: I0113 20:46:17.732869 3400 state_mem.go:75] "Updated machine memory state" Jan 13 20:46:17.753494 kubelet[3400]: I0113 20:46:17.753466 3400 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:46:17.754345 kubelet[3400]: I0113 20:46:17.754311 3400 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:46:17.916472 kubelet[3400]: I0113 20:46:17.916358 3400 topology_manager.go:215] "Topology Admit Handler" podUID="410792bb940348ca24c82202f21fa371" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-17-145" Jan 13 20:46:17.916472 kubelet[3400]: I0113 20:46:17.916470 3400 topology_manager.go:215] "Topology Admit Handler" podUID="8d3ef95cc011e3c14d21c64e53b569af" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-17-145" Jan 13 20:46:17.916736 kubelet[3400]: I0113 20:46:17.916517 3400 topology_manager.go:215] "Topology Admit Handler" podUID="b682b8e1b59c958527b5f20b9fd8a476" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-17-145" Jan 13 20:46:17.994378 kubelet[3400]: I0113 20:46:17.994339 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d3ef95cc011e3c14d21c64e53b569af-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-145\" (UID: \"8d3ef95cc011e3c14d21c64e53b569af\") " pod="kube-system/kube-controller-manager-ip-172-31-17-145" Jan 13 20:46:17.994517 kubelet[3400]: I0113 20:46:17.994397 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8d3ef95cc011e3c14d21c64e53b569af-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-145\" (UID: 
\"8d3ef95cc011e3c14d21c64e53b569af\") " pod="kube-system/kube-controller-manager-ip-172-31-17-145" Jan 13 20:46:17.996318 kubelet[3400]: I0113 20:46:17.996266 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d3ef95cc011e3c14d21c64e53b569af-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-145\" (UID: \"8d3ef95cc011e3c14d21c64e53b569af\") " pod="kube-system/kube-controller-manager-ip-172-31-17-145" Jan 13 20:46:17.996495 kubelet[3400]: I0113 20:46:17.996359 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d3ef95cc011e3c14d21c64e53b569af-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-145\" (UID: \"8d3ef95cc011e3c14d21c64e53b569af\") " pod="kube-system/kube-controller-manager-ip-172-31-17-145" Jan 13 20:46:17.996495 kubelet[3400]: I0113 20:46:17.996437 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d3ef95cc011e3c14d21c64e53b569af-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-145\" (UID: \"8d3ef95cc011e3c14d21c64e53b569af\") " pod="kube-system/kube-controller-manager-ip-172-31-17-145" Jan 13 20:46:17.996604 kubelet[3400]: I0113 20:46:17.996513 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b682b8e1b59c958527b5f20b9fd8a476-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-145\" (UID: \"b682b8e1b59c958527b5f20b9fd8a476\") " pod="kube-system/kube-scheduler-ip-172-31-17-145" Jan 13 20:46:17.996604 kubelet[3400]: I0113 20:46:17.996582 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/410792bb940348ca24c82202f21fa371-ca-certs\") pod \"kube-apiserver-ip-172-31-17-145\" (UID: \"410792bb940348ca24c82202f21fa371\") " pod="kube-system/kube-apiserver-ip-172-31-17-145" Jan 13 20:46:17.996704 kubelet[3400]: I0113 20:46:17.996612 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/410792bb940348ca24c82202f21fa371-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-145\" (UID: \"410792bb940348ca24c82202f21fa371\") " pod="kube-system/kube-apiserver-ip-172-31-17-145" Jan 13 20:46:17.996750 kubelet[3400]: I0113 20:46:17.996696 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/410792bb940348ca24c82202f21fa371-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-145\" (UID: \"410792bb940348ca24c82202f21fa371\") " pod="kube-system/kube-apiserver-ip-172-31-17-145" Jan 13 20:46:18.310437 sudo[3413]: pam_unix(sudo:session): session closed for user root Jan 13 20:46:18.522953 kubelet[3400]: I0113 20:46:18.522904 3400 apiserver.go:52] "Watching apiserver" Jan 13 20:46:18.590914 kubelet[3400]: I0113 20:46:18.590788 3400 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:46:18.668434 kubelet[3400]: E0113 20:46:18.668400 3400 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-17-145\" already exists" 
pod="kube-system/kube-scheduler-ip-172-31-17-145" Jan 13 20:46:18.696965 kubelet[3400]: I0113 20:46:18.696923 3400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-145" podStartSLOduration=1.696865783 podStartE2EDuration="1.696865783s" podCreationTimestamp="2025-01-13 20:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:46:18.693849539 +0000 UTC m=+1.306458398" watchObservedRunningTime="2025-01-13 20:46:18.696865783 +0000 UTC m=+1.309474630" Jan 13 20:46:18.720823 kubelet[3400]: I0113 20:46:18.720778 3400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-145" podStartSLOduration=1.720711209 podStartE2EDuration="1.720711209s" podCreationTimestamp="2025-01-13 20:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:46:18.710198843 +0000 UTC m=+1.322807695" watchObservedRunningTime="2025-01-13 20:46:18.720711209 +0000 UTC m=+1.333320070" Jan 13 20:46:20.375304 sudo[2231]: pam_unix(sudo:session): session closed for user root Jan 13 20:46:20.397516 sshd[2230]: Connection closed by 139.178.89.65 port 47844 Jan 13 20:46:20.398920 sshd-session[2228]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:20.404848 systemd-logind[1872]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:46:20.408461 systemd[1]: sshd@8-172.31.17.145:22-139.178.89.65:47844.service: Deactivated successfully. Jan 13 20:46:20.411700 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:46:20.411942 systemd[1]: session-9.scope: Consumed 5.346s CPU time, 183.5M memory peak, 0B memory swap peak. Jan 13 20:46:20.413684 systemd-logind[1872]: Removed session 9. Jan 13 20:46:21.187971 kubelet[3400]: I0113 20:46:21.187916 3400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-145" podStartSLOduration=4.187437442 podStartE2EDuration="4.187437442s" podCreationTimestamp="2025-01-13 20:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:46:18.721539474 +0000 UTC m=+1.334148329" watchObservedRunningTime="2025-01-13 20:46:21.187437442 +0000 UTC m=+3.800046302" Jan 13 20:46:30.121012 kubelet[3400]: I0113 20:46:30.120679 3400 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:46:30.123550 containerd[1893]: time="2025-01-13T20:46:30.122685971Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 20:46:30.123986 kubelet[3400]: I0113 20:46:30.123126 3400 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:46:30.142863 kubelet[3400]: I0113 20:46:30.142540 3400 topology_manager.go:215] "Topology Admit Handler" podUID="4bf57b2b-89cf-4644-80ad-fd6dab3040eb" podNamespace="kube-system" podName="cilium-ncw6j" Jan 13 20:46:30.147827 kubelet[3400]: I0113 20:46:30.147437 3400 topology_manager.go:215] "Topology Admit Handler" podUID="786ac818-906f-4175-b0ed-ad399d755189" podNamespace="kube-system" podName="kube-proxy-bqnlw" Jan 13 20:46:30.164738 systemd[1]: Created slice kubepods-burstable-pod4bf57b2b_89cf_4644_80ad_fd6dab3040eb.slice - libcontainer container kubepods-burstable-pod4bf57b2b_89cf_4644_80ad_fd6dab3040eb.slice. Jan 13 20:46:30.170748 systemd[1]: Created slice kubepods-besteffort-pod786ac818_906f_4175_b0ed_ad399d755189.slice - libcontainer container kubepods-besteffort-pod786ac818_906f_4175_b0ed_ad399d755189.slice. Jan 13 20:46:30.285602 kubelet[3400]: I0113 20:46:30.285552 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/786ac818-906f-4175-b0ed-ad399d755189-xtables-lock\") pod \"kube-proxy-bqnlw\" (UID: \"786ac818-906f-4175-b0ed-ad399d755189\") " pod="kube-system/kube-proxy-bqnlw" Jan 13 20:46:30.285602 kubelet[3400]: I0113 20:46:30.285607 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-cilium-config-path\") pod \"cilium-ncw6j\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " pod="kube-system/cilium-ncw6j" Jan 13 20:46:30.285816 kubelet[3400]: I0113 20:46:30.285636 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-host-proc-sys-net\") pod \"cilium-ncw6j\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " pod="kube-system/cilium-ncw6j" Jan 13 20:46:30.285816 kubelet[3400]: I0113 20:46:30.285661 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/786ac818-906f-4175-b0ed-ad399d755189-kube-proxy\") pod \"kube-proxy-bqnlw\" (UID: \"786ac818-906f-4175-b0ed-ad399d755189\") " pod="kube-system/kube-proxy-bqnlw" Jan 13 20:46:30.285816 kubelet[3400]: I0113 20:46:30.285690 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-cni-path\") pod \"cilium-ncw6j\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " pod="kube-system/cilium-ncw6j" Jan 13 20:46:30.285816 kubelet[3400]: I0113 20:46:30.285718 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-clustermesh-secrets\") pod \"cilium-ncw6j\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " pod="kube-system/cilium-ncw6j" Jan 13 20:46:30.285816 kubelet[3400]: I0113 20:46:30.285746 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dzk2\" (UniqueName: \"kubernetes.io/projected/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-kube-api-access-5dzk2\") pod 
\"cilium-ncw6j\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " pod="kube-system/cilium-ncw6j" Jan 13 20:46:30.285816 kubelet[3400]: I0113 20:46:30.285774 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-cilium-run\") pod \"cilium-ncw6j\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " pod="kube-system/cilium-ncw6j" Jan 13 20:46:30.286077 kubelet[3400]: I0113 20:46:30.285809 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-bpf-maps\") pod \"cilium-ncw6j\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " pod="kube-system/cilium-ncw6j" Jan 13 20:46:30.286077 kubelet[3400]: I0113 20:46:30.285840 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-hostproc\") pod \"cilium-ncw6j\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " pod="kube-system/cilium-ncw6j" Jan 13 20:46:30.286077 kubelet[3400]: I0113 20:46:30.285875 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-xtables-lock\") pod \"cilium-ncw6j\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " pod="kube-system/cilium-ncw6j" Jan 13 20:46:30.286077 kubelet[3400]: I0113 20:46:30.285907 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-cilium-cgroup\") pod \"cilium-ncw6j\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " pod="kube-system/cilium-ncw6j" Jan 13 20:46:30.286077 kubelet[3400]: I0113 20:46:30.285938 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-lib-modules\") pod \"cilium-ncw6j\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " pod="kube-system/cilium-ncw6j" Jan 13 20:46:30.286077 kubelet[3400]: I0113 20:46:30.285972 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-host-proc-sys-kernel\") pod \"cilium-ncw6j\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " pod="kube-system/cilium-ncw6j" Jan 13 20:46:30.286374 kubelet[3400]: I0113 20:46:30.286005 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-etc-cni-netd\") pod \"cilium-ncw6j\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " pod="kube-system/cilium-ncw6j" Jan 13 20:46:30.286374 kubelet[3400]: I0113 20:46:30.286039 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbxcx\" (UniqueName: \"kubernetes.io/projected/786ac818-906f-4175-b0ed-ad399d755189-kube-api-access-qbxcx\") pod \"kube-proxy-bqnlw\" (UID: \"786ac818-906f-4175-b0ed-ad399d755189\") " pod="kube-system/kube-proxy-bqnlw" Jan 13 20:46:30.286374 kubelet[3400]: I0113 20:46:30.286079 3400 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-hubble-tls\") pod \"cilium-ncw6j\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " pod="kube-system/cilium-ncw6j" Jan 13 20:46:30.286374 kubelet[3400]: I0113 20:46:30.286111 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/786ac818-906f-4175-b0ed-ad399d755189-lib-modules\") pod \"kube-proxy-bqnlw\" (UID: \"786ac818-906f-4175-b0ed-ad399d755189\") " pod="kube-system/kube-proxy-bqnlw" Jan 13 20:46:30.306029 kubelet[3400]: I0113 20:46:30.304628 3400 topology_manager.go:215] "Topology Admit Handler" podUID="6cd00c41-6ab1-4131-9007-a186b552cd12" podNamespace="kube-system" podName="cilium-operator-5cc964979-jkhxt" Jan 13 20:46:30.320207 systemd[1]: Created slice kubepods-besteffort-pod6cd00c41_6ab1_4131_9007_a186b552cd12.slice - libcontainer container kubepods-besteffort-pod6cd00c41_6ab1_4131_9007_a186b552cd12.slice. Jan 13 20:46:30.476125 containerd[1893]: time="2025-01-13T20:46:30.475998778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ncw6j,Uid:4bf57b2b-89cf-4644-80ad-fd6dab3040eb,Namespace:kube-system,Attempt:0,}" Jan 13 20:46:30.485201 containerd[1893]: time="2025-01-13T20:46:30.484392162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bqnlw,Uid:786ac818-906f-4175-b0ed-ad399d755189,Namespace:kube-system,Attempt:0,}" Jan 13 20:46:30.486897 kubelet[3400]: I0113 20:46:30.486818 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t7w5\" (UniqueName: \"kubernetes.io/projected/6cd00c41-6ab1-4131-9007-a186b552cd12-kube-api-access-4t7w5\") pod \"cilium-operator-5cc964979-jkhxt\" (UID: \"6cd00c41-6ab1-4131-9007-a186b552cd12\") " pod="kube-system/cilium-operator-5cc964979-jkhxt" Jan 13 20:46:30.486897 kubelet[3400]: I0113 20:46:30.486872 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cd00c41-6ab1-4131-9007-a186b552cd12-cilium-config-path\") pod \"cilium-operator-5cc964979-jkhxt\" (UID: \"6cd00c41-6ab1-4131-9007-a186b552cd12\") " pod="kube-system/cilium-operator-5cc964979-jkhxt" Jan 13 20:46:30.539812 containerd[1893]: time="2025-01-13T20:46:30.539685725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:30.539812 containerd[1893]: time="2025-01-13T20:46:30.539762036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:30.539812 containerd[1893]: time="2025-01-13T20:46:30.539783338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:30.540264 containerd[1893]: time="2025-01-13T20:46:30.539893796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:30.542077 containerd[1893]: time="2025-01-13T20:46:30.541407800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:30.542077 containerd[1893]: time="2025-01-13T20:46:30.541471751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:30.542077 containerd[1893]: time="2025-01-13T20:46:30.541496918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:30.542077 containerd[1893]: time="2025-01-13T20:46:30.541590908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:30.577368 systemd[1]: Started cri-containerd-2447edc190c16433e57ab26cd0bb35f9766667fc79040e1c2d14dfce0d4f380b.scope - libcontainer container 2447edc190c16433e57ab26cd0bb35f9766667fc79040e1c2d14dfce0d4f380b. Jan 13 20:46:30.583392 systemd[1]: Started cri-containerd-6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c.scope - libcontainer container 6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c. Jan 13 20:46:30.627591 containerd[1893]: time="2025-01-13T20:46:30.627219908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-jkhxt,Uid:6cd00c41-6ab1-4131-9007-a186b552cd12,Namespace:kube-system,Attempt:0,}" Jan 13 20:46:30.643814 containerd[1893]: time="2025-01-13T20:46:30.643767393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ncw6j,Uid:4bf57b2b-89cf-4644-80ad-fd6dab3040eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\"" Jan 13 20:46:30.648245 containerd[1893]: time="2025-01-13T20:46:30.647980667Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:46:30.663192 containerd[1893]: time="2025-01-13T20:46:30.663034118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bqnlw,Uid:786ac818-906f-4175-b0ed-ad399d755189,Namespace:kube-system,Attempt:0,} returns sandbox id \"2447edc190c16433e57ab26cd0bb35f9766667fc79040e1c2d14dfce0d4f380b\"" Jan 13 20:46:30.667947 containerd[1893]: time="2025-01-13T20:46:30.667497524Z" level=info msg="CreateContainer within sandbox \"2447edc190c16433e57ab26cd0bb35f9766667fc79040e1c2d14dfce0d4f380b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:46:30.681625 containerd[1893]: time="2025-01-13T20:46:30.681197646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:30.681625 containerd[1893]: time="2025-01-13T20:46:30.681314205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:30.681625 containerd[1893]: time="2025-01-13T20:46:30.681335074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:30.681625 containerd[1893]: time="2025-01-13T20:46:30.681484294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:30.697618 containerd[1893]: time="2025-01-13T20:46:30.697547166Z" level=info msg="CreateContainer within sandbox \"2447edc190c16433e57ab26cd0bb35f9766667fc79040e1c2d14dfce0d4f380b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b6b7e673ccf757c8a81c59b8d23bea981fa264d0505f0ec73eb48d686f2049bf\"" Jan 13 20:46:30.699575 containerd[1893]: time="2025-01-13T20:46:30.699449319Z" level=info msg="StartContainer for \"b6b7e673ccf757c8a81c59b8d23bea981fa264d0505f0ec73eb48d686f2049bf\"" Jan 13 20:46:30.709669 systemd[1]: Started cri-containerd-112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b.scope - libcontainer container 112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b. Jan 13 20:46:30.757424 systemd[1]: Started cri-containerd-b6b7e673ccf757c8a81c59b8d23bea981fa264d0505f0ec73eb48d686f2049bf.scope - libcontainer container b6b7e673ccf757c8a81c59b8d23bea981fa264d0505f0ec73eb48d686f2049bf. Jan 13 20:46:30.794398 containerd[1893]: time="2025-01-13T20:46:30.794337474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-jkhxt,Uid:6cd00c41-6ab1-4131-9007-a186b552cd12,Namespace:kube-system,Attempt:0,} returns sandbox id \"112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b\"" Jan 13 20:46:30.819281 containerd[1893]: time="2025-01-13T20:46:30.819232398Z" level=info msg="StartContainer for \"b6b7e673ccf757c8a81c59b8d23bea981fa264d0505f0ec73eb48d686f2049bf\" returns successfully" Jan 13 20:46:31.712871 kubelet[3400]: I0113 20:46:31.712305 3400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bqnlw" podStartSLOduration=1.712257293 podStartE2EDuration="1.712257293s" podCreationTimestamp="2025-01-13 20:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:46:31.712164591 +0000 UTC m=+14.324773451" watchObservedRunningTime="2025-01-13 20:46:31.712257293 +0000 UTC m=+14.324866150" Jan 13 20:46:39.969371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount415648107.mount: Deactivated successfully. 
Jan 13 20:46:42.813656 containerd[1893]: time="2025-01-13T20:46:42.813602981Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:42.814839 containerd[1893]: time="2025-01-13T20:46:42.814788064Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735339" Jan 13 20:46:42.847172 containerd[1893]: time="2025-01-13T20:46:42.846339468Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:42.847951 containerd[1893]: time="2025-01-13T20:46:42.847912825Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.199883855s" Jan 13 20:46:42.848092 containerd[1893]: time="2025-01-13T20:46:42.848069048Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 20:46:42.848896 containerd[1893]: time="2025-01-13T20:46:42.848870586Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:46:42.851484 containerd[1893]: time="2025-01-13T20:46:42.851451287Z" level=info msg="CreateContainer within sandbox \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:46:43.059298 containerd[1893]: time="2025-01-13T20:46:43.059136510Z" level=info msg="CreateContainer within sandbox \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cce9de87451216095eb308495639daa33e3618bf5965f959bff1ac508d25d907\"" Jan 13 20:46:43.060107 containerd[1893]: time="2025-01-13T20:46:43.060072658Z" level=info msg="StartContainer for \"cce9de87451216095eb308495639daa33e3618bf5965f959bff1ac508d25d907\"" Jan 13 20:46:43.381541 systemd[1]: run-containerd-runc-k8s.io-cce9de87451216095eb308495639daa33e3618bf5965f959bff1ac508d25d907-runc.v84NbJ.mount: Deactivated successfully. Jan 13 20:46:43.389909 systemd[1]: Started cri-containerd-cce9de87451216095eb308495639daa33e3618bf5965f959bff1ac508d25d907.scope - libcontainer container cce9de87451216095eb308495639daa33e3618bf5965f959bff1ac508d25d907. Jan 13 20:46:43.427372 containerd[1893]: time="2025-01-13T20:46:43.427220791Z" level=info msg="StartContainer for \"cce9de87451216095eb308495639daa33e3618bf5965f959bff1ac508d25d907\" returns successfully" Jan 13 20:46:43.442754 systemd[1]: cri-containerd-cce9de87451216095eb308495639daa33e3618bf5965f959bff1ac508d25d907.scope: Deactivated successfully. 
Jan 13 20:46:43.730447 containerd[1893]: time="2025-01-13T20:46:43.703561006Z" level=info msg="shim disconnected" id=cce9de87451216095eb308495639daa33e3618bf5965f959bff1ac508d25d907 namespace=k8s.io Jan 13 20:46:43.730447 containerd[1893]: time="2025-01-13T20:46:43.730370175Z" level=warning msg="cleaning up after shim disconnected" id=cce9de87451216095eb308495639daa33e3618bf5965f959bff1ac508d25d907 namespace=k8s.io Jan 13 20:46:43.730447 containerd[1893]: time="2025-01-13T20:46:43.730388827Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:46:43.744959 containerd[1893]: time="2025-01-13T20:46:43.744901454Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:46:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:46:44.054118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cce9de87451216095eb308495639daa33e3618bf5965f959bff1ac508d25d907-rootfs.mount: Deactivated successfully. Jan 13 20:46:44.776758 containerd[1893]: time="2025-01-13T20:46:44.776715773Z" level=info msg="CreateContainer within sandbox \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:46:44.801119 containerd[1893]: time="2025-01-13T20:46:44.796506308Z" level=info msg="CreateContainer within sandbox \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c1c573a678aa73481ce9faf2ea44bddca851e885452f13f3fa021b6a5c7b6357\"" Jan 13 20:46:44.801119 containerd[1893]: time="2025-01-13T20:46:44.799665904Z" level=info msg="StartContainer for \"c1c573a678aa73481ce9faf2ea44bddca851e885452f13f3fa021b6a5c7b6357\"" Jan 13 20:46:44.851379 systemd[1]: Started cri-containerd-c1c573a678aa73481ce9faf2ea44bddca851e885452f13f3fa021b6a5c7b6357.scope - libcontainer container c1c573a678aa73481ce9faf2ea44bddca851e885452f13f3fa021b6a5c7b6357. Jan 13 20:46:44.890049 containerd[1893]: time="2025-01-13T20:46:44.890005296Z" level=info msg="StartContainer for \"c1c573a678aa73481ce9faf2ea44bddca851e885452f13f3fa021b6a5c7b6357\" returns successfully" Jan 13 20:46:44.904202 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:46:44.904564 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:46:44.904664 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:46:44.914629 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:46:44.914911 systemd[1]: cri-containerd-c1c573a678aa73481ce9faf2ea44bddca851e885452f13f3fa021b6a5c7b6357.scope: Deactivated successfully. Jan 13 20:46:44.950360 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 20:46:44.965951 containerd[1893]: time="2025-01-13T20:46:44.965874026Z" level=info msg="shim disconnected" id=c1c573a678aa73481ce9faf2ea44bddca851e885452f13f3fa021b6a5c7b6357 namespace=k8s.io Jan 13 20:46:44.965951 containerd[1893]: time="2025-01-13T20:46:44.965944425Z" level=warning msg="cleaning up after shim disconnected" id=c1c573a678aa73481ce9faf2ea44bddca851e885452f13f3fa021b6a5c7b6357 namespace=k8s.io Jan 13 20:46:44.965951 containerd[1893]: time="2025-01-13T20:46:44.965956686Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:46:45.052838 systemd[1]: run-containerd-runc-k8s.io-c1c573a678aa73481ce9faf2ea44bddca851e885452f13f3fa021b6a5c7b6357-runc.WdxYar.mount: Deactivated successfully. Jan 13 20:46:45.053401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1c573a678aa73481ce9faf2ea44bddca851e885452f13f3fa021b6a5c7b6357-rootfs.mount: Deactivated successfully. Jan 13 20:46:45.785660 containerd[1893]: time="2025-01-13T20:46:45.785615180Z" level=info msg="CreateContainer within sandbox \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:46:45.832657 containerd[1893]: time="2025-01-13T20:46:45.832603797Z" level=info msg="CreateContainer within sandbox \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d5e93cfa681e542cbff3e44dd06f993776935360b695745f4937767220a8d5f7\"" Jan 13 20:46:45.834372 containerd[1893]: time="2025-01-13T20:46:45.833167804Z" level=info msg="StartContainer for \"d5e93cfa681e542cbff3e44dd06f993776935360b695745f4937767220a8d5f7\"" Jan 13 20:46:45.890394 systemd[1]: Started cri-containerd-d5e93cfa681e542cbff3e44dd06f993776935360b695745f4937767220a8d5f7.scope - libcontainer container d5e93cfa681e542cbff3e44dd06f993776935360b695745f4937767220a8d5f7. Jan 13 20:46:45.931573 containerd[1893]: time="2025-01-13T20:46:45.931096318Z" level=info msg="StartContainer for \"d5e93cfa681e542cbff3e44dd06f993776935360b695745f4937767220a8d5f7\" returns successfully" Jan 13 20:46:45.935470 systemd[1]: cri-containerd-d5e93cfa681e542cbff3e44dd06f993776935360b695745f4937767220a8d5f7.scope: Deactivated successfully. Jan 13 20:46:45.968543 containerd[1893]: time="2025-01-13T20:46:45.968465657Z" level=info msg="shim disconnected" id=d5e93cfa681e542cbff3e44dd06f993776935360b695745f4937767220a8d5f7 namespace=k8s.io Jan 13 20:46:45.968543 containerd[1893]: time="2025-01-13T20:46:45.968533227Z" level=warning msg="cleaning up after shim disconnected" id=d5e93cfa681e542cbff3e44dd06f993776935360b695745f4937767220a8d5f7 namespace=k8s.io Jan 13 20:46:45.968543 containerd[1893]: time="2025-01-13T20:46:45.968544519Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:46:46.053243 systemd[1]: run-containerd-runc-k8s.io-d5e93cfa681e542cbff3e44dd06f993776935360b695745f4937767220a8d5f7-runc.oRuKQe.mount: Deactivated successfully. Jan 13 20:46:46.053383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5e93cfa681e542cbff3e44dd06f993776935360b695745f4937767220a8d5f7-rootfs.mount: Deactivated successfully. 
Jan 13 20:46:46.790897 containerd[1893]: time="2025-01-13T20:46:46.790796324Z" level=info msg="CreateContainer within sandbox \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:46:46.836337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2984419105.mount: Deactivated successfully. Jan 13 20:46:46.837624 containerd[1893]: time="2025-01-13T20:46:46.837506291Z" level=info msg="CreateContainer within sandbox \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d52ac7b25b54c005815cf34d0b19e9c2efce2e365a49881c7d3682f272b1115b\"" Jan 13 20:46:46.839097 containerd[1893]: time="2025-01-13T20:46:46.838344727Z" level=info msg="StartContainer for \"d52ac7b25b54c005815cf34d0b19e9c2efce2e365a49881c7d3682f272b1115b\"" Jan 13 20:46:46.884354 systemd[1]: Started cri-containerd-d52ac7b25b54c005815cf34d0b19e9c2efce2e365a49881c7d3682f272b1115b.scope - libcontainer container d52ac7b25b54c005815cf34d0b19e9c2efce2e365a49881c7d3682f272b1115b. Jan 13 20:46:46.916912 systemd[1]: cri-containerd-d52ac7b25b54c005815cf34d0b19e9c2efce2e365a49881c7d3682f272b1115b.scope: Deactivated successfully. Jan 13 20:46:46.919571 containerd[1893]: time="2025-01-13T20:46:46.919456451Z" level=info msg="StartContainer for \"d52ac7b25b54c005815cf34d0b19e9c2efce2e365a49881c7d3682f272b1115b\" returns successfully" Jan 13 20:46:46.958907 containerd[1893]: time="2025-01-13T20:46:46.958839016Z" level=info msg="shim disconnected" id=d52ac7b25b54c005815cf34d0b19e9c2efce2e365a49881c7d3682f272b1115b namespace=k8s.io Jan 13 20:46:46.958907 containerd[1893]: time="2025-01-13T20:46:46.958903570Z" level=warning msg="cleaning up after shim disconnected" id=d52ac7b25b54c005815cf34d0b19e9c2efce2e365a49881c7d3682f272b1115b namespace=k8s.io Jan 13 20:46:46.958907 containerd[1893]: time="2025-01-13T20:46:46.958914806Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:46:47.052472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d52ac7b25b54c005815cf34d0b19e9c2efce2e365a49881c7d3682f272b1115b-rootfs.mount: Deactivated successfully. Jan 13 20:46:47.799017 containerd[1893]: time="2025-01-13T20:46:47.798974787Z" level=info msg="CreateContainer within sandbox \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:46:47.853273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1442159982.mount: Deactivated successfully. Jan 13 20:46:47.856353 containerd[1893]: time="2025-01-13T20:46:47.855951095Z" level=info msg="CreateContainer within sandbox \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ce357a251ada8d33eca18247622fba066834c8f01258ee275ad8afdde126dfba\"" Jan 13 20:46:47.858972 containerd[1893]: time="2025-01-13T20:46:47.858940711Z" level=info msg="StartContainer for \"ce357a251ada8d33eca18247622fba066834c8f01258ee275ad8afdde126dfba\"" Jan 13 20:46:47.907390 systemd[1]: Started cri-containerd-ce357a251ada8d33eca18247622fba066834c8f01258ee275ad8afdde126dfba.scope - libcontainer container ce357a251ada8d33eca18247622fba066834c8f01258ee275ad8afdde126dfba. 
Jan 13 20:46:47.946325 containerd[1893]: time="2025-01-13T20:46:47.946241863Z" level=info msg="StartContainer for \"ce357a251ada8d33eca18247622fba066834c8f01258ee275ad8afdde126dfba\" returns successfully" Jan 13 20:46:48.204539 kubelet[3400]: I0113 20:46:48.204430 3400 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:46:48.250717 kubelet[3400]: I0113 20:46:48.250671 3400 topology_manager.go:215] "Topology Admit Handler" podUID="65db7c41-5ea2-4551-b363-3b243862d863" podNamespace="kube-system" podName="coredns-76f75df574-2kg7z" Jan 13 20:46:48.256834 kubelet[3400]: I0113 20:46:48.256798 3400 topology_manager.go:215] "Topology Admit Handler" podUID="d0e18256-5ab2-44bb-8867-03771b5ef763" podNamespace="kube-system" podName="coredns-76f75df574-p4v68" Jan 13 20:46:48.264576 systemd[1]: Created slice kubepods-burstable-pod65db7c41_5ea2_4551_b363_3b243862d863.slice - libcontainer container kubepods-burstable-pod65db7c41_5ea2_4551_b363_3b243862d863.slice. Jan 13 20:46:48.277780 systemd[1]: Created slice kubepods-burstable-podd0e18256_5ab2_44bb_8867_03771b5ef763.slice - libcontainer container kubepods-burstable-podd0e18256_5ab2_44bb_8867_03771b5ef763.slice. Jan 13 20:46:48.340412 kubelet[3400]: I0113 20:46:48.340375 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65db7c41-5ea2-4551-b363-3b243862d863-config-volume\") pod \"coredns-76f75df574-2kg7z\" (UID: \"65db7c41-5ea2-4551-b363-3b243862d863\") " pod="kube-system/coredns-76f75df574-2kg7z" Jan 13 20:46:48.340578 kubelet[3400]: I0113 20:46:48.340435 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qflvj\" (UniqueName: \"kubernetes.io/projected/65db7c41-5ea2-4551-b363-3b243862d863-kube-api-access-qflvj\") pod \"coredns-76f75df574-2kg7z\" (UID: \"65db7c41-5ea2-4551-b363-3b243862d863\") " pod="kube-system/coredns-76f75df574-2kg7z" Jan 13 20:46:48.441050 kubelet[3400]: I0113 20:46:48.441006 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt4fk\" (UniqueName: \"kubernetes.io/projected/d0e18256-5ab2-44bb-8867-03771b5ef763-kube-api-access-wt4fk\") pod \"coredns-76f75df574-p4v68\" (UID: \"d0e18256-5ab2-44bb-8867-03771b5ef763\") " pod="kube-system/coredns-76f75df574-p4v68" Jan 13 20:46:48.441050 kubelet[3400]: I0113 20:46:48.441061 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0e18256-5ab2-44bb-8867-03771b5ef763-config-volume\") pod \"coredns-76f75df574-p4v68\" (UID: \"d0e18256-5ab2-44bb-8867-03771b5ef763\") " pod="kube-system/coredns-76f75df574-p4v68" Jan 13 20:46:48.575358 containerd[1893]: time="2025-01-13T20:46:48.575293559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2kg7z,Uid:65db7c41-5ea2-4551-b363-3b243862d863,Namespace:kube-system,Attempt:0,}" Jan 13 20:46:48.585536 containerd[1893]: time="2025-01-13T20:46:48.585228419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p4v68,Uid:d0e18256-5ab2-44bb-8867-03771b5ef763,Namespace:kube-system,Attempt:0,}" Jan 13 20:46:55.670730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount160655857.mount: Deactivated successfully. 
Jan 13 20:46:56.316824 containerd[1893]: time="2025-01-13T20:46:56.316708378Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:56.318706 containerd[1893]: time="2025-01-13T20:46:56.318556321Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906553" Jan 13 20:46:56.320243 containerd[1893]: time="2025-01-13T20:46:56.320179718Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:56.322669 containerd[1893]: time="2025-01-13T20:46:56.322075464Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 13.472738322s" Jan 13 20:46:56.322669 containerd[1893]: time="2025-01-13T20:46:56.322137023Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 13 20:46:56.328229 containerd[1893]: time="2025-01-13T20:46:56.327288967Z" level=info msg="CreateContainer within sandbox \"112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:46:56.349788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2445723617.mount: Deactivated successfully. Jan 13 20:46:56.351396 containerd[1893]: time="2025-01-13T20:46:56.351165351Z" level=info msg="CreateContainer within sandbox \"112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"933ce6e07dee2bc4e5d22710f3a5eab4a5cc4d7dc538f6e49f5031d4868be35f\"" Jan 13 20:46:56.352310 containerd[1893]: time="2025-01-13T20:46:56.352278290Z" level=info msg="StartContainer for \"933ce6e07dee2bc4e5d22710f3a5eab4a5cc4d7dc538f6e49f5031d4868be35f\"" Jan 13 20:46:56.391534 systemd[1]: Started cri-containerd-933ce6e07dee2bc4e5d22710f3a5eab4a5cc4d7dc538f6e49f5031d4868be35f.scope - libcontainer container 933ce6e07dee2bc4e5d22710f3a5eab4a5cc4d7dc538f6e49f5031d4868be35f. 
Jan 13 20:46:56.425135 containerd[1893]: time="2025-01-13T20:46:56.425083397Z" level=info msg="StartContainer for \"933ce6e07dee2bc4e5d22710f3a5eab4a5cc4d7dc538f6e49f5031d4868be35f\" returns successfully" Jan 13 20:46:56.852308 kubelet[3400]: I0113 20:46:56.851768 3400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-ncw6j" podStartSLOduration=14.650308385 podStartE2EDuration="26.851702261s" podCreationTimestamp="2025-01-13 20:46:30 +0000 UTC" firstStartedPulling="2025-01-13 20:46:30.647220144 +0000 UTC m=+13.259828992" lastFinishedPulling="2025-01-13 20:46:42.848614012 +0000 UTC m=+25.461222868" observedRunningTime="2025-01-13 20:46:48.82698089 +0000 UTC m=+31.439589749" watchObservedRunningTime="2025-01-13 20:46:56.851702261 +0000 UTC m=+39.464311119" Jan 13 20:47:00.454832 systemd-networkd[1729]: cilium_host: Link UP Jan 13 20:47:00.464458 systemd-networkd[1729]: cilium_net: Link UP Jan 13 20:47:00.466572 systemd-networkd[1729]: cilium_net: Gained carrier Jan 13 20:47:00.469269 systemd-networkd[1729]: cilium_host: Gained carrier Jan 13 20:47:00.488121 systemd-networkd[1729]: cilium_net: Gained IPv6LL Jan 13 20:47:00.489460 systemd-networkd[1729]: cilium_host: Gained IPv6LL Jan 13 20:47:00.499466 (udev-worker)[4224]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:47:00.500020 (udev-worker)[4225]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:47:00.667077 (udev-worker)[4222]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:47:00.679520 systemd-networkd[1729]: cilium_vxlan: Link UP Jan 13 20:47:00.679529 systemd-networkd[1729]: cilium_vxlan: Gained carrier Jan 13 20:47:02.066194 kernel: NET: Registered PF_ALG protocol family Jan 13 20:47:02.730396 systemd-networkd[1729]: cilium_vxlan: Gained IPv6LL Jan 13 20:47:03.830948 (udev-worker)[4242]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:47:03.832007 systemd-networkd[1729]: lxc_health: Link UP Jan 13 20:47:03.838882 systemd-networkd[1729]: lxc_health: Gained carrier Jan 13 20:47:04.230203 systemd-networkd[1729]: lxcfba556393801: Link UP Jan 13 20:47:04.234795 kernel: eth0: renamed from tmp6d006 Jan 13 20:47:04.243503 systemd-networkd[1729]: lxcfba556393801: Gained carrier Jan 13 20:47:04.272038 systemd-networkd[1729]: lxc7fadf48e84ae: Link UP Jan 13 20:47:04.279473 kernel: eth0: renamed from tmp0aef1 Jan 13 20:47:04.290245 systemd-networkd[1729]: lxc7fadf48e84ae: Gained carrier Jan 13 20:47:04.512112 kubelet[3400]: I0113 20:47:04.511987 3400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-jkhxt" podStartSLOduration=8.984442414 podStartE2EDuration="34.511879266s" podCreationTimestamp="2025-01-13 20:46:30 +0000 UTC" firstStartedPulling="2025-01-13 20:46:30.795738864 +0000 UTC m=+13.408347711" lastFinishedPulling="2025-01-13 20:46:56.323175721 +0000 UTC m=+38.935784563" observedRunningTime="2025-01-13 20:46:56.853369328 +0000 UTC m=+39.465978191" watchObservedRunningTime="2025-01-13 20:47:04.511879266 +0000 UTC m=+47.124488125" Jan 13 20:47:04.748550 systemd[1]: Started sshd@9-172.31.17.145:22-139.178.89.65:33864.service - OpenSSH per-connection server daemon (139.178.89.65:33864). 
Jan 13 20:47:05.034634 sshd[4575]: Accepted publickey for core from 139.178.89.65 port 33864 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:47:05.039829 sshd-session[4575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:05.072680 systemd-logind[1872]: New session 10 of user core. Jan 13 20:47:05.078438 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:47:05.416358 systemd-networkd[1729]: lxc_health: Gained IPv6LL Jan 13 20:47:05.480314 systemd-networkd[1729]: lxcfba556393801: Gained IPv6LL Jan 13 20:47:05.864352 systemd-networkd[1729]: lxc7fadf48e84ae: Gained IPv6LL Jan 13 20:47:06.351623 sshd[4579]: Connection closed by 139.178.89.65 port 33864 Jan 13 20:47:06.353550 sshd-session[4575]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:06.358819 systemd-logind[1872]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:47:06.362680 systemd[1]: sshd@9-172.31.17.145:22-139.178.89.65:33864.service: Deactivated successfully. Jan 13 20:47:06.368536 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:47:06.370936 systemd-logind[1872]: Removed session 10. Jan 13 20:47:08.406005 ntpd[1867]: Listen normally on 8 cilium_host 192.168.0.91:123 Jan 13 20:47:08.408999 ntpd[1867]: 13 Jan 20:47:08 ntpd[1867]: Listen normally on 8 cilium_host 192.168.0.91:123 Jan 13 20:47:08.408999 ntpd[1867]: 13 Jan 20:47:08 ntpd[1867]: Listen normally on 9 cilium_net [fe80::9c35:eaff:fea7:dbde%4]:123 Jan 13 20:47:08.408999 ntpd[1867]: 13 Jan 20:47:08 ntpd[1867]: Listen normally on 10 cilium_host [fe80::9054:dbff:feff:fee5%5]:123 Jan 13 20:47:08.408999 ntpd[1867]: 13 Jan 20:47:08 ntpd[1867]: Listen normally on 11 cilium_vxlan [fe80::dc92:5bff:fec8:78c2%6]:123 Jan 13 20:47:08.408999 ntpd[1867]: 13 Jan 20:47:08 ntpd[1867]: Listen normally on 12 lxc_health [fe80::c861:1dff:fe5e:f743%8]:123 Jan 13 20:47:08.408999 ntpd[1867]: 13 Jan 20:47:08 ntpd[1867]: Listen normally on 13 lxcfba556393801 [fe80::c49f:99ff:fe53:3110%10]:123 Jan 13 20:47:08.408999 ntpd[1867]: 13 Jan 20:47:08 ntpd[1867]: Listen normally on 14 lxc7fadf48e84ae [fe80::50b4:73ff:feab:bad%12]:123 Jan 13 20:47:08.406109 ntpd[1867]: Listen normally on 9 cilium_net [fe80::9c35:eaff:fea7:dbde%4]:123 Jan 13 20:47:08.407436 ntpd[1867]: Listen normally on 10 cilium_host [fe80::9054:dbff:feff:fee5%5]:123 Jan 13 20:47:08.407515 ntpd[1867]: Listen normally on 11 cilium_vxlan [fe80::dc92:5bff:fec8:78c2%6]:123 Jan 13 20:47:08.407561 ntpd[1867]: Listen normally on 12 lxc_health [fe80::c861:1dff:fe5e:f743%8]:123 Jan 13 20:47:08.407602 ntpd[1867]: Listen normally on 13 lxcfba556393801 [fe80::c49f:99ff:fe53:3110%10]:123 Jan 13 20:47:08.407708 ntpd[1867]: Listen normally on 14 lxc7fadf48e84ae [fe80::50b4:73ff:feab:bad%12]:123 Jan 13 20:47:11.226880 containerd[1893]: time="2025-01-13T20:47:11.225347436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:47:11.226880 containerd[1893]: time="2025-01-13T20:47:11.225445481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:47:11.226880 containerd[1893]: time="2025-01-13T20:47:11.225469938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:47:11.226880 containerd[1893]: time="2025-01-13T20:47:11.225600142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:47:11.229960 containerd[1893]: time="2025-01-13T20:47:11.228717780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:47:11.229960 containerd[1893]: time="2025-01-13T20:47:11.228887139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:47:11.229960 containerd[1893]: time="2025-01-13T20:47:11.228913387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:47:11.229960 containerd[1893]: time="2025-01-13T20:47:11.229058613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:47:11.301518 systemd[1]: Started cri-containerd-0aef10dfd2256b66427104d6b82021e960fd3abc3f3a5004379890ac1b699da9.scope - libcontainer container 0aef10dfd2256b66427104d6b82021e960fd3abc3f3a5004379890ac1b699da9. Jan 13 20:47:11.307107 systemd[1]: Started cri-containerd-6d006a8b906687d6d1c027da15a49ed878ce5c61e16d3e32316413358829d518.scope - libcontainer container 6d006a8b906687d6d1c027da15a49ed878ce5c61e16d3e32316413358829d518. Jan 13 20:47:11.400410 systemd[1]: Started sshd@10-172.31.17.145:22-139.178.89.65:49962.service - OpenSSH per-connection server daemon (139.178.89.65:49962). Jan 13 20:47:11.465127 containerd[1893]: time="2025-01-13T20:47:11.464881174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p4v68,Uid:d0e18256-5ab2-44bb-8867-03771b5ef763,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d006a8b906687d6d1c027da15a49ed878ce5c61e16d3e32316413358829d518\"" Jan 13 20:47:11.500688 containerd[1893]: time="2025-01-13T20:47:11.500260980Z" level=info msg="CreateContainer within sandbox \"6d006a8b906687d6d1c027da15a49ed878ce5c61e16d3e32316413358829d518\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:47:11.569035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1045997989.mount: Deactivated successfully. Jan 13 20:47:11.576863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2615678492.mount: Deactivated successfully. 
Jan 13 20:47:11.586322 containerd[1893]: time="2025-01-13T20:47:11.586268865Z" level=info msg="CreateContainer within sandbox \"6d006a8b906687d6d1c027da15a49ed878ce5c61e16d3e32316413358829d518\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c18daa1344029de267d06c8133b7cdce04ac30d25241ea75e304b49a0a390510\"" Jan 13 20:47:11.587278 containerd[1893]: time="2025-01-13T20:47:11.587241054Z" level=info msg="StartContainer for \"c18daa1344029de267d06c8133b7cdce04ac30d25241ea75e304b49a0a390510\"" Jan 13 20:47:11.589576 containerd[1893]: time="2025-01-13T20:47:11.589524930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2kg7z,Uid:65db7c41-5ea2-4551-b363-3b243862d863,Namespace:kube-system,Attempt:0,} returns sandbox id \"0aef10dfd2256b66427104d6b82021e960fd3abc3f3a5004379890ac1b699da9\"" Jan 13 20:47:11.596767 containerd[1893]: time="2025-01-13T20:47:11.596659066Z" level=info msg="CreateContainer within sandbox \"0aef10dfd2256b66427104d6b82021e960fd3abc3f3a5004379890ac1b699da9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:47:11.634196 containerd[1893]: time="2025-01-13T20:47:11.633676409Z" level=info msg="CreateContainer within sandbox \"0aef10dfd2256b66427104d6b82021e960fd3abc3f3a5004379890ac1b699da9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fdc3e9d888e009358c66beb7df4bd90d63cd956010e2479f2a249fbdb998e2dd\"" Jan 13 20:47:11.640939 containerd[1893]: time="2025-01-13T20:47:11.639752174Z" level=info msg="StartContainer for \"fdc3e9d888e009358c66beb7df4bd90d63cd956010e2479f2a249fbdb998e2dd\"" Jan 13 20:47:11.681242 sshd[4674]: Accepted publickey for core from 139.178.89.65 port 49962 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:47:11.683595 systemd[1]: Started cri-containerd-c18daa1344029de267d06c8133b7cdce04ac30d25241ea75e304b49a0a390510.scope - libcontainer container c18daa1344029de267d06c8133b7cdce04ac30d25241ea75e304b49a0a390510. Jan 13 20:47:11.685972 sshd-session[4674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:11.700024 systemd-logind[1872]: New session 11 of user core. Jan 13 20:47:11.704698 systemd[1]: Started cri-containerd-fdc3e9d888e009358c66beb7df4bd90d63cd956010e2479f2a249fbdb998e2dd.scope - libcontainer container fdc3e9d888e009358c66beb7df4bd90d63cd956010e2479f2a249fbdb998e2dd. Jan 13 20:47:11.707383 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 13 20:47:11.763877 containerd[1893]: time="2025-01-13T20:47:11.762942718Z" level=info msg="StartContainer for \"c18daa1344029de267d06c8133b7cdce04ac30d25241ea75e304b49a0a390510\" returns successfully" Jan 13 20:47:11.775180 containerd[1893]: time="2025-01-13T20:47:11.775005418Z" level=info msg="StartContainer for \"fdc3e9d888e009358c66beb7df4bd90d63cd956010e2479f2a249fbdb998e2dd\" returns successfully" Jan 13 20:47:11.955071 kubelet[3400]: I0113 20:47:11.953923 3400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2kg7z" podStartSLOduration=41.953872531 podStartE2EDuration="41.953872531s" podCreationTimestamp="2025-01-13 20:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:47:11.920379515 +0000 UTC m=+54.532988372" watchObservedRunningTime="2025-01-13 20:47:11.953872531 +0000 UTC m=+54.566481391" Jan 13 20:47:12.016266 sshd[4739]: Connection closed by 139.178.89.65 port 49962 Jan 13 20:47:12.018687 sshd-session[4674]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:12.025443 systemd[1]: sshd@10-172.31.17.145:22-139.178.89.65:49962.service: Deactivated successfully. Jan 13 20:47:12.032254 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:47:12.038431 systemd-logind[1872]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:47:12.048989 systemd-logind[1872]: Removed session 11. Jan 13 20:47:12.953626 kubelet[3400]: I0113 20:47:12.953522 3400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-p4v68" podStartSLOduration=42.953448251 podStartE2EDuration="42.953448251s" podCreationTimestamp="2025-01-13 20:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:47:11.959881458 +0000 UTC m=+54.572490317" watchObservedRunningTime="2025-01-13 20:47:12.953448251 +0000 UTC m=+55.566057108" Jan 13 20:47:17.055658 systemd[1]: Started sshd@11-172.31.17.145:22-139.178.89.65:49968.service - OpenSSH per-connection server daemon (139.178.89.65:49968). Jan 13 20:47:17.326352 sshd[4792]: Accepted publickey for core from 139.178.89.65 port 49968 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:47:17.328817 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:17.339902 systemd-logind[1872]: New session 12 of user core. Jan 13 20:47:17.345635 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:47:17.681704 sshd[4794]: Connection closed by 139.178.89.65 port 49968 Jan 13 20:47:17.683294 sshd-session[4792]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:17.688444 systemd[1]: sshd@11-172.31.17.145:22-139.178.89.65:49968.service: Deactivated successfully. Jan 13 20:47:17.693674 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:47:17.698955 systemd-logind[1872]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:47:17.707176 systemd-logind[1872]: Removed session 12. Jan 13 20:47:22.719825 systemd[1]: Started sshd@12-172.31.17.145:22-139.178.89.65:46304.service - OpenSSH per-connection server daemon (139.178.89.65:46304). 
Jan 13 20:47:22.906496 sshd[4808]: Accepted publickey for core from 139.178.89.65 port 46304 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:47:22.910738 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:22.925772 systemd-logind[1872]: New session 13 of user core. Jan 13 20:47:22.942513 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:47:23.176438 sshd[4810]: Connection closed by 139.178.89.65 port 46304 Jan 13 20:47:23.177363 sshd-session[4808]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:23.182806 systemd-logind[1872]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:47:23.183895 systemd[1]: sshd@12-172.31.17.145:22-139.178.89.65:46304.service: Deactivated successfully. Jan 13 20:47:23.188511 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:47:23.190044 systemd-logind[1872]: Removed session 13. Jan 13 20:47:23.226056 systemd[1]: Started sshd@13-172.31.17.145:22-139.178.89.65:46312.service - OpenSSH per-connection server daemon (139.178.89.65:46312). Jan 13 20:47:23.422169 sshd[4822]: Accepted publickey for core from 139.178.89.65 port 46312 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:47:23.425656 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:23.439516 systemd-logind[1872]: New session 14 of user core. Jan 13 20:47:23.445614 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:47:23.749637 sshd[4824]: Connection closed by 139.178.89.65 port 46312 Jan 13 20:47:23.750922 sshd-session[4822]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:23.759805 systemd[1]: sshd@13-172.31.17.145:22-139.178.89.65:46312.service: Deactivated successfully. Jan 13 20:47:23.761461 systemd-logind[1872]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:47:23.765003 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:47:23.770205 systemd-logind[1872]: Removed session 14. Jan 13 20:47:23.787510 systemd[1]: Started sshd@14-172.31.17.145:22-139.178.89.65:46324.service - OpenSSH per-connection server daemon (139.178.89.65:46324). Jan 13 20:47:23.962651 sshd[4833]: Accepted publickey for core from 139.178.89.65 port 46324 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:47:23.966483 sshd-session[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:23.974163 systemd-logind[1872]: New session 15 of user core. Jan 13 20:47:23.979421 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:47:24.183718 sshd[4835]: Connection closed by 139.178.89.65 port 46324 Jan 13 20:47:24.185446 sshd-session[4833]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:24.189601 systemd[1]: sshd@14-172.31.17.145:22-139.178.89.65:46324.service: Deactivated successfully. Jan 13 20:47:24.192044 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:47:24.194239 systemd-logind[1872]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:47:24.195686 systemd-logind[1872]: Removed session 15. Jan 13 20:47:29.218554 systemd[1]: Started sshd@15-172.31.17.145:22-139.178.89.65:46334.service - OpenSSH per-connection server daemon (139.178.89.65:46334). 
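Every connection in this journal authenticates with the same RSA key, which sshd identifies by its SHA256 fingerprint in the "Accepted publickey" entries. A small sketch, assuming the usual authorized_keys location for the core user (the log only shows the resulting fingerprint), reproduces that fingerprint string:

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Assumed location of the core user's key; not named in the log.
    	data, err := os.ReadFile("/home/core/.ssh/authorized_keys")
    	if err != nil {
    		log.Fatal(err)
    	}
    	pub, _, _, _, err := ssh.ParseAuthorizedKey(data)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Prints "SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI" for the
    	// key accepted in the sessions above.
    	fmt.Println(ssh.FingerprintSHA256(pub))
    }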
Jan 13 20:47:29.390076 sshd[4847]: Accepted publickey for core from 139.178.89.65 port 46334 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:47:29.391731 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:29.402992 systemd-logind[1872]: New session 16 of user core. Jan 13 20:47:29.412801 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:47:29.729592 sshd[4849]: Connection closed by 139.178.89.65 port 46334 Jan 13 20:47:29.731425 sshd-session[4847]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:29.740612 systemd-logind[1872]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:47:29.741829 systemd[1]: sshd@15-172.31.17.145:22-139.178.89.65:46334.service: Deactivated successfully. Jan 13 20:47:29.744756 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:47:29.746215 systemd-logind[1872]: Removed session 16. Jan 13 20:47:34.764605 systemd[1]: Started sshd@16-172.31.17.145:22-139.178.89.65:36036.service - OpenSSH per-connection server daemon (139.178.89.65:36036). Jan 13 20:47:34.956975 sshd[4865]: Accepted publickey for core from 139.178.89.65 port 36036 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:47:34.957649 sshd-session[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:34.969164 systemd-logind[1872]: New session 17 of user core. Jan 13 20:47:34.974440 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:47:35.222820 sshd[4867]: Connection closed by 139.178.89.65 port 36036 Jan 13 20:47:35.222061 sshd-session[4865]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:35.225969 systemd[1]: sshd@16-172.31.17.145:22-139.178.89.65:36036.service: Deactivated successfully. Jan 13 20:47:35.228825 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:47:35.231071 systemd-logind[1872]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:47:35.233246 systemd-logind[1872]: Removed session 17. Jan 13 20:47:40.262856 systemd[1]: Started sshd@17-172.31.17.145:22-139.178.89.65:36050.service - OpenSSH per-connection server daemon (139.178.89.65:36050). Jan 13 20:47:40.454241 sshd[4880]: Accepted publickey for core from 139.178.89.65 port 36050 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:47:40.456045 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:40.462675 systemd-logind[1872]: New session 18 of user core. Jan 13 20:47:40.468523 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:47:40.760073 sshd[4882]: Connection closed by 139.178.89.65 port 36050 Jan 13 20:47:40.760795 sshd-session[4880]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:40.765761 systemd-logind[1872]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:47:40.767082 systemd[1]: sshd@17-172.31.17.145:22-139.178.89.65:36050.service: Deactivated successfully. Jan 13 20:47:40.770688 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:47:40.777088 systemd-logind[1872]: Removed session 18. Jan 13 20:47:40.797555 systemd[1]: Started sshd@18-172.31.17.145:22-139.178.89.65:36062.service - OpenSSH per-connection server daemon (139.178.89.65:36062). 
Jan 13 20:47:40.978401 sshd[4894]: Accepted publickey for core from 139.178.89.65 port 36062 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:47:40.979989 sshd-session[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:40.987243 systemd-logind[1872]: New session 19 of user core. Jan 13 20:47:40.991405 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:47:41.729405 sshd[4896]: Connection closed by 139.178.89.65 port 36062 Jan 13 20:47:41.731592 sshd-session[4894]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:41.735242 systemd[1]: sshd@18-172.31.17.145:22-139.178.89.65:36062.service: Deactivated successfully. Jan 13 20:47:41.739080 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:47:41.742428 systemd-logind[1872]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:47:41.744295 systemd-logind[1872]: Removed session 19. Jan 13 20:47:41.775970 systemd[1]: Started sshd@19-172.31.17.145:22-139.178.89.65:52030.service - OpenSSH per-connection server daemon (139.178.89.65:52030). Jan 13 20:47:42.001665 sshd[4905]: Accepted publickey for core from 139.178.89.65 port 52030 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:47:42.007562 sshd-session[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:42.030233 systemd-logind[1872]: New session 20 of user core. Jan 13 20:47:42.041410 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:47:44.327707 sshd[4907]: Connection closed by 139.178.89.65 port 52030 Jan 13 20:47:44.328510 sshd-session[4905]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:44.345832 systemd[1]: sshd@19-172.31.17.145:22-139.178.89.65:52030.service: Deactivated successfully. Jan 13 20:47:44.349577 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:47:44.352510 systemd-logind[1872]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:47:44.380353 systemd[1]: Started sshd@20-172.31.17.145:22-139.178.89.65:52044.service - OpenSSH per-connection server daemon (139.178.89.65:52044). Jan 13 20:47:44.384959 systemd-logind[1872]: Removed session 20. Jan 13 20:47:44.580625 sshd[4923]: Accepted publickey for core from 139.178.89.65 port 52044 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:47:44.582681 sshd-session[4923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:44.592308 systemd-logind[1872]: New session 21 of user core. Jan 13 20:47:44.596485 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:47:45.109451 sshd[4925]: Connection closed by 139.178.89.65 port 52044 Jan 13 20:47:45.116585 sshd-session[4923]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:45.165299 systemd[1]: sshd@20-172.31.17.145:22-139.178.89.65:52044.service: Deactivated successfully. Jan 13 20:47:45.171760 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:47:45.176771 systemd-logind[1872]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:47:45.185032 systemd[1]: Started sshd@21-172.31.17.145:22-139.178.89.65:52046.service - OpenSSH per-connection server daemon (139.178.89.65:52046). Jan 13 20:47:45.186905 systemd-logind[1872]: Removed session 21. 
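Each sshd@N-<local>:22-<peer>:<port>.service unit above is a per-connection instance: the listener stays with systemd and a fresh, independently terminable service wraps every accepted TCP connection, which is why a multi-session exchange creates and tears down several numbered units in a row. A toy model of that accept-per-unit pattern (the port and naming are illustrative, not systemd's actual implementation):

    package main

    import (
    	"fmt"
    	"log"
    	"net"
    )

    // Toy model of per-connection activation: one long-lived listener, one
    // freshly named handler per accepted connection, mirroring the
    // sshd@N-<local>-<peer>.service units in the journal.
    func main() {
    	ln, err := net.Listen("tcp", "127.0.0.1:2222") // illustrative port
    	if err != nil {
    		log.Fatal(err)
    	}
    	for n := 10; ; n++ {
    		conn, err := ln.Accept()
    		if err != nil {
    			log.Print(err)
    			continue
    		}
    		unit := fmt.Sprintf("sshd@%d-%v-%v.service", n, ln.Addr(), conn.RemoteAddr())
    		go func(c net.Conn, name string) {
    			defer c.Close()
    			log.Printf("Started %s", name)
    			// ... serve the session, then log the teardown ...
    			log.Printf("%s: Deactivated successfully.", name)
    		}(conn, unit)
    	}
    }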
Jan 13 20:47:45.356036 sshd[4934]: Accepted publickey for core from 139.178.89.65 port 52046 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:47:45.359067 sshd-session[4934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:45.364861 systemd-logind[1872]: New session 22 of user core. Jan 13 20:47:45.370862 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:47:45.576764 sshd[4936]: Connection closed by 139.178.89.65 port 52046 Jan 13 20:47:45.578638 sshd-session[4934]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:45.582886 systemd[1]: sshd@21-172.31.17.145:22-139.178.89.65:52046.service: Deactivated successfully. Jan 13 20:47:45.587355 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:47:45.588437 systemd-logind[1872]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:47:45.589623 systemd-logind[1872]: Removed session 22. Jan 13 20:47:50.648654 systemd[1]: Started sshd@22-172.31.17.145:22-139.178.89.65:52062.service - OpenSSH per-connection server daemon (139.178.89.65:52062). Jan 13 20:47:50.832015 sshd[4947]: Accepted publickey for core from 139.178.89.65 port 52062 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:47:50.833801 sshd-session[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:50.840406 systemd-logind[1872]: New session 23 of user core. Jan 13 20:47:50.844383 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:47:51.144384 sshd[4949]: Connection closed by 139.178.89.65 port 52062 Jan 13 20:47:51.146714 sshd-session[4947]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:51.151906 systemd[1]: sshd@22-172.31.17.145:22-139.178.89.65:52062.service: Deactivated successfully. Jan 13 20:47:51.154664 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:47:51.157561 systemd-logind[1872]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:47:51.159452 systemd-logind[1872]: Removed session 23. Jan 13 20:47:56.186602 systemd[1]: Started sshd@23-172.31.17.145:22-139.178.89.65:59432.service - OpenSSH per-connection server daemon (139.178.89.65:59432). Jan 13 20:47:56.417186 sshd[4963]: Accepted publickey for core from 139.178.89.65 port 59432 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:47:56.419633 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:47:56.430293 systemd-logind[1872]: New session 24 of user core. Jan 13 20:47:56.438544 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 20:47:56.682405 sshd[4965]: Connection closed by 139.178.89.65 port 59432 Jan 13 20:47:56.684091 sshd-session[4963]: pam_unix(sshd:session): session closed for user core Jan 13 20:47:56.687678 systemd[1]: sshd@23-172.31.17.145:22-139.178.89.65:59432.service: Deactivated successfully. Jan 13 20:47:56.690632 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 20:47:56.692746 systemd-logind[1872]: Session 24 logged out. Waiting for processes to exit. Jan 13 20:47:56.694136 systemd-logind[1872]: Removed session 24. Jan 13 20:48:01.747104 systemd[1]: Started sshd@24-172.31.17.145:22-139.178.89.65:44096.service - OpenSSH per-connection server daemon (139.178.89.65:44096). 
Jan 13 20:48:02.104202 sshd[4978]: Accepted publickey for core from 139.178.89.65 port 44096 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:48:02.106863 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:48:02.126228 systemd-logind[1872]: New session 25 of user core. Jan 13 20:48:02.135494 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 20:48:02.499451 sshd[4980]: Connection closed by 139.178.89.65 port 44096 Jan 13 20:48:02.501456 sshd-session[4978]: pam_unix(sshd:session): session closed for user core Jan 13 20:48:02.509117 systemd-logind[1872]: Session 25 logged out. Waiting for processes to exit. Jan 13 20:48:02.512495 systemd[1]: sshd@24-172.31.17.145:22-139.178.89.65:44096.service: Deactivated successfully. Jan 13 20:48:02.520854 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 20:48:02.522947 systemd-logind[1872]: Removed session 25. Jan 13 20:48:07.548956 systemd[1]: Started sshd@25-172.31.17.145:22-139.178.89.65:44098.service - OpenSSH per-connection server daemon (139.178.89.65:44098). Jan 13 20:48:07.758511 sshd[4990]: Accepted publickey for core from 139.178.89.65 port 44098 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:48:07.762371 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:48:07.768401 systemd-logind[1872]: New session 26 of user core. Jan 13 20:48:07.774512 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 20:48:07.974538 sshd[4992]: Connection closed by 139.178.89.65 port 44098 Jan 13 20:48:07.975790 sshd-session[4990]: pam_unix(sshd:session): session closed for user core Jan 13 20:48:07.979929 systemd[1]: sshd@25-172.31.17.145:22-139.178.89.65:44098.service: Deactivated successfully. Jan 13 20:48:07.993057 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 20:48:08.001043 systemd-logind[1872]: Session 26 logged out. Waiting for processes to exit. Jan 13 20:48:08.021665 systemd[1]: Started sshd@26-172.31.17.145:22-139.178.89.65:44110.service - OpenSSH per-connection server daemon (139.178.89.65:44110). Jan 13 20:48:08.024321 systemd-logind[1872]: Removed session 26. Jan 13 20:48:08.183965 sshd[5002]: Accepted publickey for core from 139.178.89.65 port 44110 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:48:08.186290 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:48:08.199596 systemd-logind[1872]: New session 27 of user core. Jan 13 20:48:08.210483 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 13 20:48:09.815537 containerd[1893]: time="2025-01-13T20:48:09.815390693Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:48:09.834088 containerd[1893]: time="2025-01-13T20:48:09.834027407Z" level=info msg="StopContainer for \"933ce6e07dee2bc4e5d22710f3a5eab4a5cc4d7dc538f6e49f5031d4868be35f\" with timeout 30 (s)" Jan 13 20:48:09.835483 containerd[1893]: time="2025-01-13T20:48:09.835438189Z" level=info msg="StopContainer for \"ce357a251ada8d33eca18247622fba066834c8f01258ee275ad8afdde126dfba\" with timeout 2 (s)" Jan 13 20:48:09.838453 containerd[1893]: time="2025-01-13T20:48:09.838391047Z" level=info msg="Stop container \"933ce6e07dee2bc4e5d22710f3a5eab4a5cc4d7dc538f6e49f5031d4868be35f\" with signal terminated" Jan 13 20:48:09.838638 containerd[1893]: time="2025-01-13T20:48:09.838405684Z" level=info msg="Stop container \"ce357a251ada8d33eca18247622fba066834c8f01258ee275ad8afdde126dfba\" with signal terminated" Jan 13 20:48:09.858332 systemd-networkd[1729]: lxc_health: Link DOWN Jan 13 20:48:09.858343 systemd-networkd[1729]: lxc_health: Lost carrier Jan 13 20:48:09.899980 systemd[1]: cri-containerd-933ce6e07dee2bc4e5d22710f3a5eab4a5cc4d7dc538f6e49f5031d4868be35f.scope: Deactivated successfully. Jan 13 20:48:09.920374 systemd[1]: cri-containerd-ce357a251ada8d33eca18247622fba066834c8f01258ee275ad8afdde126dfba.scope: Deactivated successfully. Jan 13 20:48:09.920708 systemd[1]: cri-containerd-ce357a251ada8d33eca18247622fba066834c8f01258ee275ad8afdde126dfba.scope: Consumed 9.370s CPU time. Jan 13 20:48:09.956247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-933ce6e07dee2bc4e5d22710f3a5eab4a5cc4d7dc538f6e49f5031d4868be35f-rootfs.mount: Deactivated successfully. Jan 13 20:48:09.967858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce357a251ada8d33eca18247622fba066834c8f01258ee275ad8afdde126dfba-rootfs.mount: Deactivated successfully. 
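The StopContainer entries spell out the grace-period contract: containerd delivers SIGTERM ("signal terminated") and escalates to SIGKILL only if the task outlives the timeout (30 s for the operator container, 2 s for the agent). A sketch of the same sequence with the containerd Go client, using the k8s.io namespace and a container ID from the log; the socket path is the conventional default, assumed here:

    package main

    import (
    	"context"
    	"log"
    	"syscall"
    	"time"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// Container ID taken from the StopContainer entry above.
    	c, err := client.LoadContainer(ctx, "933ce6e07dee2bc4e5d22710f3a5eab4a5cc4d7dc538f6e49f5031d4868be35f")
    	if err != nil {
    		log.Fatal(err)
    	}
    	task, err := c.Task(ctx, nil)
    	if err != nil {
    		log.Fatal(err)
    	}
    	exitCh, err := task.Wait(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// "Stop container ... with signal terminated"
    	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
    		log.Fatal(err)
    	}
    	select {
    	case status := <-exitCh:
    		code, _, _ := status.Result()
    		log.Printf("exited with status %d", code)
    	case <-time.After(30 * time.Second): // the "timeout 30 (s)" in the log
    		_ = task.Kill(ctx, syscall.SIGKILL)
    		<-exitCh
    	}
    }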
Jan 13 20:48:09.988039 containerd[1893]: time="2025-01-13T20:48:09.987925427Z" level=info msg="shim disconnected" id=933ce6e07dee2bc4e5d22710f3a5eab4a5cc4d7dc538f6e49f5031d4868be35f namespace=k8s.io Jan 13 20:48:09.988039 containerd[1893]: time="2025-01-13T20:48:09.988002167Z" level=warning msg="cleaning up after shim disconnected" id=933ce6e07dee2bc4e5d22710f3a5eab4a5cc4d7dc538f6e49f5031d4868be35f namespace=k8s.io Jan 13 20:48:09.988039 containerd[1893]: time="2025-01-13T20:48:09.988015263Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:48:09.988665 containerd[1893]: time="2025-01-13T20:48:09.987927985Z" level=info msg="shim disconnected" id=ce357a251ada8d33eca18247622fba066834c8f01258ee275ad8afdde126dfba namespace=k8s.io Jan 13 20:48:09.988944 containerd[1893]: time="2025-01-13T20:48:09.988359401Z" level=warning msg="cleaning up after shim disconnected" id=ce357a251ada8d33eca18247622fba066834c8f01258ee275ad8afdde126dfba namespace=k8s.io Jan 13 20:48:09.988944 containerd[1893]: time="2025-01-13T20:48:09.988773421Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:48:10.023833 containerd[1893]: time="2025-01-13T20:48:10.023761781Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:48:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:48:10.029803 containerd[1893]: time="2025-01-13T20:48:10.029757976Z" level=info msg="StopContainer for \"933ce6e07dee2bc4e5d22710f3a5eab4a5cc4d7dc538f6e49f5031d4868be35f\" returns successfully" Jan 13 20:48:10.031680 containerd[1893]: time="2025-01-13T20:48:10.031643055Z" level=info msg="StopContainer for \"ce357a251ada8d33eca18247622fba066834c8f01258ee275ad8afdde126dfba\" returns successfully" Jan 13 20:48:10.034493 containerd[1893]: time="2025-01-13T20:48:10.034444070Z" level=info msg="StopPodSandbox for \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\"" Jan 13 20:48:10.035704 containerd[1893]: time="2025-01-13T20:48:10.034444117Z" level=info msg="StopPodSandbox for \"112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b\"" Jan 13 20:48:10.049880 containerd[1893]: time="2025-01-13T20:48:10.035725535Z" level=info msg="Container to stop \"933ce6e07dee2bc4e5d22710f3a5eab4a5cc4d7dc538f6e49f5031d4868be35f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:48:10.049922 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b-shm.mount: Deactivated successfully. 
Jan 13 20:48:10.050484 containerd[1893]: time="2025-01-13T20:48:10.039339588Z" level=info msg="Container to stop \"cce9de87451216095eb308495639daa33e3618bf5965f959bff1ac508d25d907\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:48:10.050819 containerd[1893]: time="2025-01-13T20:48:10.050793164Z" level=info msg="Container to stop \"c1c573a678aa73481ce9faf2ea44bddca851e885452f13f3fa021b6a5c7b6357\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:48:10.050916 containerd[1893]: time="2025-01-13T20:48:10.050900357Z" level=info msg="Container to stop \"d52ac7b25b54c005815cf34d0b19e9c2efce2e365a49881c7d3682f272b1115b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:48:10.050987 containerd[1893]: time="2025-01-13T20:48:10.050971247Z" level=info msg="Container to stop \"ce357a251ada8d33eca18247622fba066834c8f01258ee275ad8afdde126dfba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:48:10.051061 containerd[1893]: time="2025-01-13T20:48:10.051047239Z" level=info msg="Container to stop \"d5e93cfa681e542cbff3e44dd06f993776935360b695745f4937767220a8d5f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:48:10.059599 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c-shm.mount: Deactivated successfully. Jan 13 20:48:10.066965 systemd[1]: cri-containerd-6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c.scope: Deactivated successfully. Jan 13 20:48:10.083920 systemd[1]: cri-containerd-112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b.scope: Deactivated successfully. Jan 13 20:48:10.123902 containerd[1893]: time="2025-01-13T20:48:10.123588087Z" level=info msg="shim disconnected" id=112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b namespace=k8s.io Jan 13 20:48:10.124137 containerd[1893]: time="2025-01-13T20:48:10.123938654Z" level=warning msg="cleaning up after shim disconnected" id=112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b namespace=k8s.io Jan 13 20:48:10.124137 containerd[1893]: time="2025-01-13T20:48:10.123952854Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:48:10.124979 containerd[1893]: time="2025-01-13T20:48:10.124932608Z" level=info msg="shim disconnected" id=6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c namespace=k8s.io Jan 13 20:48:10.125411 containerd[1893]: time="2025-01-13T20:48:10.125234814Z" level=warning msg="cleaning up after shim disconnected" id=6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c namespace=k8s.io Jan 13 20:48:10.125411 containerd[1893]: time="2025-01-13T20:48:10.125258593Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:48:10.155004 containerd[1893]: time="2025-01-13T20:48:10.154950121Z" level=info msg="TearDown network for sandbox \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\" successfully" Jan 13 20:48:10.155004 containerd[1893]: time="2025-01-13T20:48:10.154991212Z" level=info msg="StopPodSandbox for \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\" returns successfully" Jan 13 20:48:10.155832 containerd[1893]: time="2025-01-13T20:48:10.155294960Z" level=info msg="TearDown network for sandbox \"112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b\" successfully" Jan 13 20:48:10.155832 containerd[1893]: 
time="2025-01-13T20:48:10.155656571Z" level=info msg="StopPodSandbox for \"112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b\" returns successfully" Jan 13 20:48:10.273822 kubelet[3400]: I0113 20:48:10.273773 3400 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-cni-path\") pod \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " Jan 13 20:48:10.273822 kubelet[3400]: I0113 20:48:10.273833 3400 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-xtables-lock\") pod \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " Jan 13 20:48:10.274467 kubelet[3400]: I0113 20:48:10.273861 3400 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-bpf-maps\") pod \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " Jan 13 20:48:10.274467 kubelet[3400]: I0113 20:48:10.273891 3400 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-hubble-tls\") pod \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " Jan 13 20:48:10.274467 kubelet[3400]: I0113 20:48:10.273922 3400 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cd00c41-6ab1-4131-9007-a186b552cd12-cilium-config-path\") pod \"6cd00c41-6ab1-4131-9007-a186b552cd12\" (UID: \"6cd00c41-6ab1-4131-9007-a186b552cd12\") " Jan 13 20:48:10.274467 kubelet[3400]: I0113 20:48:10.273956 3400 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-clustermesh-secrets\") pod \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " Jan 13 20:48:10.274467 kubelet[3400]: I0113 20:48:10.273992 3400 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dzk2\" (UniqueName: \"kubernetes.io/projected/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-kube-api-access-5dzk2\") pod \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " Jan 13 20:48:10.274467 kubelet[3400]: I0113 20:48:10.274018 3400 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-host-proc-sys-kernel\") pod \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " Jan 13 20:48:10.274833 kubelet[3400]: I0113 20:48:10.274042 3400 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-etc-cni-netd\") pod \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " Jan 13 20:48:10.274833 kubelet[3400]: I0113 20:48:10.274073 3400 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t7w5\" (UniqueName: 
\"kubernetes.io/projected/6cd00c41-6ab1-4131-9007-a186b552cd12-kube-api-access-4t7w5\") pod \"6cd00c41-6ab1-4131-9007-a186b552cd12\" (UID: \"6cd00c41-6ab1-4131-9007-a186b552cd12\") " Jan 13 20:48:10.274833 kubelet[3400]: I0113 20:48:10.274105 3400 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-cilium-cgroup\") pod \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " Jan 13 20:48:10.274833 kubelet[3400]: I0113 20:48:10.274132 3400 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-cilium-run\") pod \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " Jan 13 20:48:10.274833 kubelet[3400]: I0113 20:48:10.274179 3400 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-host-proc-sys-net\") pod \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " Jan 13 20:48:10.274833 kubelet[3400]: I0113 20:48:10.274206 3400 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-hostproc\") pod \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " Jan 13 20:48:10.275113 kubelet[3400]: I0113 20:48:10.274315 3400 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-lib-modules\") pod \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " Jan 13 20:48:10.275113 kubelet[3400]: I0113 20:48:10.274351 3400 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-cilium-config-path\") pod \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\" (UID: \"4bf57b2b-89cf-4644-80ad-fd6dab3040eb\") " Jan 13 20:48:10.277097 kubelet[3400]: I0113 20:48:10.277051 3400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4bf57b2b-89cf-4644-80ad-fd6dab3040eb" (UID: "4bf57b2b-89cf-4644-80ad-fd6dab3040eb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:48:10.277248 kubelet[3400]: I0113 20:48:10.277128 3400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4bf57b2b-89cf-4644-80ad-fd6dab3040eb" (UID: "4bf57b2b-89cf-4644-80ad-fd6dab3040eb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:48:10.279064 kubelet[3400]: I0113 20:48:10.275201 3400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4bf57b2b-89cf-4644-80ad-fd6dab3040eb" (UID: "4bf57b2b-89cf-4644-80ad-fd6dab3040eb"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:48:10.279064 kubelet[3400]: I0113 20:48:10.278862 3400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-cni-path" (OuterVolumeSpecName: "cni-path") pod "4bf57b2b-89cf-4644-80ad-fd6dab3040eb" (UID: "4bf57b2b-89cf-4644-80ad-fd6dab3040eb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:48:10.279064 kubelet[3400]: I0113 20:48:10.278889 3400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4bf57b2b-89cf-4644-80ad-fd6dab3040eb" (UID: "4bf57b2b-89cf-4644-80ad-fd6dab3040eb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:48:10.279064 kubelet[3400]: I0113 20:48:10.278914 3400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4bf57b2b-89cf-4644-80ad-fd6dab3040eb" (UID: "4bf57b2b-89cf-4644-80ad-fd6dab3040eb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:48:10.289865 kubelet[3400]: I0113 20:48:10.289814 3400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4bf57b2b-89cf-4644-80ad-fd6dab3040eb" (UID: "4bf57b2b-89cf-4644-80ad-fd6dab3040eb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:48:10.291329 kubelet[3400]: I0113 20:48:10.291289 3400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd00c41-6ab1-4131-9007-a186b552cd12-kube-api-access-4t7w5" (OuterVolumeSpecName: "kube-api-access-4t7w5") pod "6cd00c41-6ab1-4131-9007-a186b552cd12" (UID: "6cd00c41-6ab1-4131-9007-a186b552cd12"). InnerVolumeSpecName "kube-api-access-4t7w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:48:10.291428 kubelet[3400]: I0113 20:48:10.291343 3400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4bf57b2b-89cf-4644-80ad-fd6dab3040eb" (UID: "4bf57b2b-89cf-4644-80ad-fd6dab3040eb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:48:10.291428 kubelet[3400]: I0113 20:48:10.291367 3400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4bf57b2b-89cf-4644-80ad-fd6dab3040eb" (UID: "4bf57b2b-89cf-4644-80ad-fd6dab3040eb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:48:10.291428 kubelet[3400]: I0113 20:48:10.291394 3400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4bf57b2b-89cf-4644-80ad-fd6dab3040eb" (UID: "4bf57b2b-89cf-4644-80ad-fd6dab3040eb"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:48:10.291428 kubelet[3400]: I0113 20:48:10.291419 3400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-hostproc" (OuterVolumeSpecName: "hostproc") pod "4bf57b2b-89cf-4644-80ad-fd6dab3040eb" (UID: "4bf57b2b-89cf-4644-80ad-fd6dab3040eb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:48:10.291624 kubelet[3400]: I0113 20:48:10.291443 3400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4bf57b2b-89cf-4644-80ad-fd6dab3040eb" (UID: "4bf57b2b-89cf-4644-80ad-fd6dab3040eb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:48:10.293175 kubelet[3400]: I0113 20:48:10.292975 3400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cd00c41-6ab1-4131-9007-a186b552cd12-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6cd00c41-6ab1-4131-9007-a186b552cd12" (UID: "6cd00c41-6ab1-4131-9007-a186b552cd12"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:48:10.295512 kubelet[3400]: I0113 20:48:10.295472 3400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4bf57b2b-89cf-4644-80ad-fd6dab3040eb" (UID: "4bf57b2b-89cf-4644-80ad-fd6dab3040eb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:48:10.296118 kubelet[3400]: I0113 20:48:10.296086 3400 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-kube-api-access-5dzk2" (OuterVolumeSpecName: "kube-api-access-5dzk2") pod "4bf57b2b-89cf-4644-80ad-fd6dab3040eb" (UID: "4bf57b2b-89cf-4644-80ad-fd6dab3040eb"). InnerVolumeSpecName "kube-api-access-5dzk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:48:10.375983 kubelet[3400]: I0113 20:48:10.375746 3400 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-hubble-tls\") on node \"ip-172-31-17-145\" DevicePath \"\"" Jan 13 20:48:10.375983 kubelet[3400]: I0113 20:48:10.375904 3400 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6cd00c41-6ab1-4131-9007-a186b552cd12-cilium-config-path\") on node \"ip-172-31-17-145\" DevicePath \"\"" Jan 13 20:48:10.375983 kubelet[3400]: I0113 20:48:10.375923 3400 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-clustermesh-secrets\") on node \"ip-172-31-17-145\" DevicePath \"\"" Jan 13 20:48:10.375983 kubelet[3400]: I0113 20:48:10.375958 3400 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5dzk2\" (UniqueName: \"kubernetes.io/projected/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-kube-api-access-5dzk2\") on node \"ip-172-31-17-145\" DevicePath \"\"" Jan 13 20:48:10.375983 kubelet[3400]: I0113 20:48:10.375972 3400 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-bpf-maps\") on node \"ip-172-31-17-145\" DevicePath \"\"" Jan 13 20:48:10.375983 kubelet[3400]: I0113 20:48:10.375988 3400 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-host-proc-sys-kernel\") on node \"ip-172-31-17-145\" DevicePath \"\"" Jan 13 20:48:10.376369 kubelet[3400]: I0113 20:48:10.376004 3400 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4t7w5\" (UniqueName: \"kubernetes.io/projected/6cd00c41-6ab1-4131-9007-a186b552cd12-kube-api-access-4t7w5\") on node \"ip-172-31-17-145\" DevicePath \"\"" Jan 13 20:48:10.376369 kubelet[3400]: I0113 20:48:10.376016 3400 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-cilium-cgroup\") on node \"ip-172-31-17-145\" DevicePath \"\"" Jan 13 20:48:10.376369 kubelet[3400]: I0113 20:48:10.376029 3400 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-etc-cni-netd\") on node \"ip-172-31-17-145\" DevicePath \"\"" Jan 13 20:48:10.376369 kubelet[3400]: I0113 20:48:10.376042 3400 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-host-proc-sys-net\") on node \"ip-172-31-17-145\" DevicePath \"\"" Jan 13 20:48:10.376369 kubelet[3400]: I0113 20:48:10.376055 3400 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-cilium-run\") on node \"ip-172-31-17-145\" DevicePath \"\"" Jan 13 20:48:10.376369 kubelet[3400]: I0113 20:48:10.376067 3400 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-hostproc\") on node \"ip-172-31-17-145\" DevicePath \"\"" Jan 13 20:48:10.376369 kubelet[3400]: I0113 20:48:10.376081 3400 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-cilium-config-path\") on node \"ip-172-31-17-145\" DevicePath \"\"" Jan 13 20:48:10.376369 kubelet[3400]: I0113 20:48:10.376094 3400 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-lib-modules\") on node \"ip-172-31-17-145\" DevicePath \"\"" Jan 13 20:48:10.376639 kubelet[3400]: I0113 20:48:10.376106 3400 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-cni-path\") on node \"ip-172-31-17-145\" DevicePath \"\"" Jan 13 20:48:10.376639 kubelet[3400]: I0113 20:48:10.376120 3400 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bf57b2b-89cf-4644-80ad-fd6dab3040eb-xtables-lock\") on node \"ip-172-31-17-145\" DevicePath \"\"" Jan 13 20:48:10.782104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b-rootfs.mount: Deactivated successfully. Jan 13 20:48:10.782241 systemd[1]: var-lib-kubelet-pods-6cd00c41\x2d6ab1\x2d4131\x2d9007\x2da186b552cd12-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4t7w5.mount: Deactivated successfully. Jan 13 20:48:10.782372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c-rootfs.mount: Deactivated successfully. Jan 13 20:48:10.782450 systemd[1]: var-lib-kubelet-pods-4bf57b2b\x2d89cf\x2d4644\x2d80ad\x2dfd6dab3040eb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:48:10.782563 systemd[1]: var-lib-kubelet-pods-4bf57b2b\x2d89cf\x2d4644\x2d80ad\x2dfd6dab3040eb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5dzk2.mount: Deactivated successfully. Jan 13 20:48:10.782657 systemd[1]: var-lib-kubelet-pods-4bf57b2b\x2d89cf\x2d4644\x2d80ad\x2dfd6dab3040eb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 20:48:11.160122 systemd[1]: Removed slice kubepods-besteffort-pod6cd00c41_6ab1_4131_9007_a186b552cd12.slice - libcontainer container kubepods-besteffort-pod6cd00c41_6ab1_4131_9007_a186b552cd12.slice. Jan 13 20:48:11.200707 kubelet[3400]: I0113 20:48:11.200659 3400 scope.go:117] "RemoveContainer" containerID="933ce6e07dee2bc4e5d22710f3a5eab4a5cc4d7dc538f6e49f5031d4868be35f" Jan 13 20:48:11.232856 systemd[1]: Removed slice kubepods-burstable-pod4bf57b2b_89cf_4644_80ad_fd6dab3040eb.slice - libcontainer container kubepods-burstable-pod4bf57b2b_89cf_4644_80ad_fd6dab3040eb.slice. Jan 13 20:48:11.233120 systemd[1]: kubepods-burstable-pod4bf57b2b_89cf_4644_80ad_fd6dab3040eb.slice: Consumed 9.464s CPU time. 
Jan 13 20:48:11.241915 containerd[1893]: time="2025-01-13T20:48:11.241861886Z" level=info msg="RemoveContainer for \"933ce6e07dee2bc4e5d22710f3a5eab4a5cc4d7dc538f6e49f5031d4868be35f\"" Jan 13 20:48:11.255411 containerd[1893]: time="2025-01-13T20:48:11.255218042Z" level=info msg="RemoveContainer for \"933ce6e07dee2bc4e5d22710f3a5eab4a5cc4d7dc538f6e49f5031d4868be35f\" returns successfully" Jan 13 20:48:11.255730 kubelet[3400]: I0113 20:48:11.255695 3400 scope.go:117] "RemoveContainer" containerID="ce357a251ada8d33eca18247622fba066834c8f01258ee275ad8afdde126dfba" Jan 13 20:48:11.257905 containerd[1893]: time="2025-01-13T20:48:11.257774105Z" level=info msg="RemoveContainer for \"ce357a251ada8d33eca18247622fba066834c8f01258ee275ad8afdde126dfba\"" Jan 13 20:48:11.267419 containerd[1893]: time="2025-01-13T20:48:11.267371061Z" level=info msg="RemoveContainer for \"ce357a251ada8d33eca18247622fba066834c8f01258ee275ad8afdde126dfba\" returns successfully" Jan 13 20:48:11.267779 kubelet[3400]: I0113 20:48:11.267746 3400 scope.go:117] "RemoveContainer" containerID="d52ac7b25b54c005815cf34d0b19e9c2efce2e365a49881c7d3682f272b1115b" Jan 13 20:48:11.284238 containerd[1893]: time="2025-01-13T20:48:11.279849946Z" level=info msg="RemoveContainer for \"d52ac7b25b54c005815cf34d0b19e9c2efce2e365a49881c7d3682f272b1115b\"" Jan 13 20:48:11.295460 containerd[1893]: time="2025-01-13T20:48:11.295351785Z" level=info msg="RemoveContainer for \"d52ac7b25b54c005815cf34d0b19e9c2efce2e365a49881c7d3682f272b1115b\" returns successfully" Jan 13 20:48:11.296019 kubelet[3400]: I0113 20:48:11.295918 3400 scope.go:117] "RemoveContainer" containerID="d5e93cfa681e542cbff3e44dd06f993776935360b695745f4937767220a8d5f7" Jan 13 20:48:11.297847 containerd[1893]: time="2025-01-13T20:48:11.297809079Z" level=info msg="RemoveContainer for \"d5e93cfa681e542cbff3e44dd06f993776935360b695745f4937767220a8d5f7\"" Jan 13 20:48:11.336117 containerd[1893]: time="2025-01-13T20:48:11.336070442Z" level=info msg="RemoveContainer for \"d5e93cfa681e542cbff3e44dd06f993776935360b695745f4937767220a8d5f7\" returns successfully" Jan 13 20:48:11.336580 kubelet[3400]: I0113 20:48:11.336447 3400 scope.go:117] "RemoveContainer" containerID="c1c573a678aa73481ce9faf2ea44bddca851e885452f13f3fa021b6a5c7b6357" Jan 13 20:48:11.337879 containerd[1893]: time="2025-01-13T20:48:11.337820004Z" level=info msg="RemoveContainer for \"c1c573a678aa73481ce9faf2ea44bddca851e885452f13f3fa021b6a5c7b6357\"" Jan 13 20:48:11.370564 containerd[1893]: time="2025-01-13T20:48:11.370476098Z" level=info msg="RemoveContainer for \"c1c573a678aa73481ce9faf2ea44bddca851e885452f13f3fa021b6a5c7b6357\" returns successfully" Jan 13 20:48:11.370918 kubelet[3400]: I0113 20:48:11.370898 3400 scope.go:117] "RemoveContainer" containerID="cce9de87451216095eb308495639daa33e3618bf5965f959bff1ac508d25d907" Jan 13 20:48:11.372686 containerd[1893]: time="2025-01-13T20:48:11.372648641Z" level=info msg="RemoveContainer for \"cce9de87451216095eb308495639daa33e3618bf5965f959bff1ac508d25d907\"" Jan 13 20:48:11.412086 containerd[1893]: time="2025-01-13T20:48:11.411866150Z" level=info msg="RemoveContainer for \"cce9de87451216095eb308495639daa33e3618bf5965f959bff1ac508d25d907\" returns successfully" Jan 13 20:48:11.622045 kubelet[3400]: I0113 20:48:11.622002 3400 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4bf57b2b-89cf-4644-80ad-fd6dab3040eb" path="/var/lib/kubelet/pods/4bf57b2b-89cf-4644-80ad-fd6dab3040eb/volumes" Jan 13 20:48:11.623638 kubelet[3400]: I0113 20:48:11.623232 3400 
kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6cd00c41-6ab1-4131-9007-a186b552cd12" path="/var/lib/kubelet/pods/6cd00c41-6ab1-4131-9007-a186b552cd12/volumes" Jan 13 20:48:11.663248 sshd[5004]: Connection closed by 139.178.89.65 port 44110 Jan 13 20:48:11.662073 sshd-session[5002]: pam_unix(sshd:session): session closed for user core Jan 13 20:48:11.693410 systemd[1]: sshd@26-172.31.17.145:22-139.178.89.65:44110.service: Deactivated successfully. Jan 13 20:48:11.698608 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 20:48:11.705350 systemd-logind[1872]: Session 27 logged out. Waiting for processes to exit. Jan 13 20:48:11.712657 systemd[1]: Started sshd@27-172.31.17.145:22-139.178.89.65:50286.service - OpenSSH per-connection server daemon (139.178.89.65:50286). Jan 13 20:48:11.716583 systemd-logind[1872]: Removed session 27. Jan 13 20:48:11.950988 sshd[5164]: Accepted publickey for core from 139.178.89.65 port 50286 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:48:11.952440 sshd-session[5164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:48:11.961232 systemd-logind[1872]: New session 28 of user core. Jan 13 20:48:11.966359 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 13 20:48:12.405774 ntpd[1867]: Deleting interface #12 lxc_health, fe80::c861:1dff:fe5e:f743%8#123, interface stats: received=0, sent=0, dropped=0, active_time=64 secs Jan 13 20:48:12.407100 ntpd[1867]: 13 Jan 20:48:12 ntpd[1867]: Deleting interface #12 lxc_health, fe80::c861:1dff:fe5e:f743%8#123, interface stats: received=0, sent=0, dropped=0, active_time=64 secs Jan 13 20:48:12.546235 sshd[5166]: Connection closed by 139.178.89.65 port 50286 Jan 13 20:48:12.545598 sshd-session[5164]: pam_unix(sshd:session): session closed for user core Jan 13 20:48:12.552932 systemd-logind[1872]: Session 28 logged out. Waiting for processes to exit. Jan 13 20:48:12.553045 systemd[1]: sshd@27-172.31.17.145:22-139.178.89.65:50286.service: Deactivated successfully. Jan 13 20:48:12.558817 systemd[1]: session-28.scope: Deactivated successfully. Jan 13 20:48:12.563214 systemd-logind[1872]: Removed session 28. 
Jan 13 20:48:12.570978 kubelet[3400]: I0113 20:48:12.570405 3400 topology_manager.go:215] "Topology Admit Handler" podUID="9d9bbd3e-031b-4c00-90de-d67d47649a37" podNamespace="kube-system" podName="cilium-52z7c" Jan 13 20:48:12.576220 kubelet[3400]: E0113 20:48:12.575814 3400 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4bf57b2b-89cf-4644-80ad-fd6dab3040eb" containerName="mount-bpf-fs" Jan 13 20:48:12.576220 kubelet[3400]: E0113 20:48:12.575845 3400 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4bf57b2b-89cf-4644-80ad-fd6dab3040eb" containerName="cilium-agent" Jan 13 20:48:12.576220 kubelet[3400]: E0113 20:48:12.575856 3400 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4bf57b2b-89cf-4644-80ad-fd6dab3040eb" containerName="mount-cgroup" Jan 13 20:48:12.576220 kubelet[3400]: E0113 20:48:12.575881 3400 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4bf57b2b-89cf-4644-80ad-fd6dab3040eb" containerName="apply-sysctl-overwrites" Jan 13 20:48:12.576220 kubelet[3400]: E0113 20:48:12.575893 3400 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4bf57b2b-89cf-4644-80ad-fd6dab3040eb" containerName="clean-cilium-state" Jan 13 20:48:12.576220 kubelet[3400]: E0113 20:48:12.575904 3400 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6cd00c41-6ab1-4131-9007-a186b552cd12" containerName="cilium-operator" Jan 13 20:48:12.580740 kubelet[3400]: I0113 20:48:12.578790 3400 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf57b2b-89cf-4644-80ad-fd6dab3040eb" containerName="cilium-agent" Jan 13 20:48:12.580740 kubelet[3400]: I0113 20:48:12.578835 3400 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd00c41-6ab1-4131-9007-a186b552cd12" containerName="cilium-operator" Jan 13 20:48:12.589762 systemd[1]: Started sshd@28-172.31.17.145:22-139.178.89.65:50294.service - OpenSSH per-connection server daemon (139.178.89.65:50294). 
Jan 13 20:48:12.633670 kubelet[3400]: I0113 20:48:12.630998 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d9bbd3e-031b-4c00-90de-d67d47649a37-lib-modules\") pod \"cilium-52z7c\" (UID: \"9d9bbd3e-031b-4c00-90de-d67d47649a37\") " pod="kube-system/cilium-52z7c" Jan 13 20:48:12.633670 kubelet[3400]: I0113 20:48:12.631061 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d9bbd3e-031b-4c00-90de-d67d47649a37-clustermesh-secrets\") pod \"cilium-52z7c\" (UID: \"9d9bbd3e-031b-4c00-90de-d67d47649a37\") " pod="kube-system/cilium-52z7c" Jan 13 20:48:12.633670 kubelet[3400]: I0113 20:48:12.631093 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9d9bbd3e-031b-4c00-90de-d67d47649a37-cilium-ipsec-secrets\") pod \"cilium-52z7c\" (UID: \"9d9bbd3e-031b-4c00-90de-d67d47649a37\") " pod="kube-system/cilium-52z7c" Jan 13 20:48:12.633670 kubelet[3400]: I0113 20:48:12.631125 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8mps\" (UniqueName: \"kubernetes.io/projected/9d9bbd3e-031b-4c00-90de-d67d47649a37-kube-api-access-c8mps\") pod \"cilium-52z7c\" (UID: \"9d9bbd3e-031b-4c00-90de-d67d47649a37\") " pod="kube-system/cilium-52z7c" Jan 13 20:48:12.633473 systemd[1]: Created slice kubepods-burstable-pod9d9bbd3e_031b_4c00_90de_d67d47649a37.slice - libcontainer container kubepods-burstable-pod9d9bbd3e_031b_4c00_90de_d67d47649a37.slice. Jan 13 20:48:12.638181 kubelet[3400]: I0113 20:48:12.637527 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d9bbd3e-031b-4c00-90de-d67d47649a37-hubble-tls\") pod \"cilium-52z7c\" (UID: \"9d9bbd3e-031b-4c00-90de-d67d47649a37\") " pod="kube-system/cilium-52z7c" Jan 13 20:48:12.638181 kubelet[3400]: I0113 20:48:12.637605 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d9bbd3e-031b-4c00-90de-d67d47649a37-bpf-maps\") pod \"cilium-52z7c\" (UID: \"9d9bbd3e-031b-4c00-90de-d67d47649a37\") " pod="kube-system/cilium-52z7c" Jan 13 20:48:12.638181 kubelet[3400]: I0113 20:48:12.637638 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d9bbd3e-031b-4c00-90de-d67d47649a37-host-proc-sys-net\") pod \"cilium-52z7c\" (UID: \"9d9bbd3e-031b-4c00-90de-d67d47649a37\") " pod="kube-system/cilium-52z7c" Jan 13 20:48:12.638181 kubelet[3400]: I0113 20:48:12.637669 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d9bbd3e-031b-4c00-90de-d67d47649a37-cilium-cgroup\") pod \"cilium-52z7c\" (UID: \"9d9bbd3e-031b-4c00-90de-d67d47649a37\") " pod="kube-system/cilium-52z7c" Jan 13 20:48:12.638181 kubelet[3400]: I0113 20:48:12.637697 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d9bbd3e-031b-4c00-90de-d67d47649a37-cni-path\") pod \"cilium-52z7c\" (UID: \"9d9bbd3e-031b-4c00-90de-d67d47649a37\") " 
pod="kube-system/cilium-52z7c" Jan 13 20:48:12.638181 kubelet[3400]: I0113 20:48:12.637740 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d9bbd3e-031b-4c00-90de-d67d47649a37-cilium-config-path\") pod \"cilium-52z7c\" (UID: \"9d9bbd3e-031b-4c00-90de-d67d47649a37\") " pod="kube-system/cilium-52z7c" Jan 13 20:48:12.638560 kubelet[3400]: I0113 20:48:12.637769 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d9bbd3e-031b-4c00-90de-d67d47649a37-cilium-run\") pod \"cilium-52z7c\" (UID: \"9d9bbd3e-031b-4c00-90de-d67d47649a37\") " pod="kube-system/cilium-52z7c" Jan 13 20:48:12.638560 kubelet[3400]: I0113 20:48:12.637797 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d9bbd3e-031b-4c00-90de-d67d47649a37-hostproc\") pod \"cilium-52z7c\" (UID: \"9d9bbd3e-031b-4c00-90de-d67d47649a37\") " pod="kube-system/cilium-52z7c" Jan 13 20:48:12.638560 kubelet[3400]: I0113 20:48:12.637827 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d9bbd3e-031b-4c00-90de-d67d47649a37-host-proc-sys-kernel\") pod \"cilium-52z7c\" (UID: \"9d9bbd3e-031b-4c00-90de-d67d47649a37\") " pod="kube-system/cilium-52z7c" Jan 13 20:48:12.638560 kubelet[3400]: I0113 20:48:12.637864 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d9bbd3e-031b-4c00-90de-d67d47649a37-etc-cni-netd\") pod \"cilium-52z7c\" (UID: \"9d9bbd3e-031b-4c00-90de-d67d47649a37\") " pod="kube-system/cilium-52z7c" Jan 13 20:48:12.638560 kubelet[3400]: I0113 20:48:12.637892 3400 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d9bbd3e-031b-4c00-90de-d67d47649a37-xtables-lock\") pod \"cilium-52z7c\" (UID: \"9d9bbd3e-031b-4c00-90de-d67d47649a37\") " pod="kube-system/cilium-52z7c" Jan 13 20:48:12.670764 kubelet[3400]: W0113 20:48:12.670487 3400 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-17-145" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-145' and this object Jan 13 20:48:12.670764 kubelet[3400]: W0113 20:48:12.670623 3400 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-17-145" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-145' and this object Jan 13 20:48:12.674032 kubelet[3400]: E0113 20:48:12.673041 3400 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-17-145" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-145' and this object Jan 13 20:48:12.674332 kubelet[3400]: E0113 20:48:12.674180 3400 reflector.go:147] 
object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-17-145" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-145' and this object Jan 13 20:48:12.674332 kubelet[3400]: W0113 20:48:12.674282 3400 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-17-145" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-145' and this object Jan 13 20:48:12.674332 kubelet[3400]: E0113 20:48:12.674303 3400 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-17-145" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-145' and this object Jan 13 20:48:12.674634 kubelet[3400]: W0113 20:48:12.674592 3400 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-17-145" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-145' and this object Jan 13 20:48:12.674634 kubelet[3400]: E0113 20:48:12.674619 3400 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-17-145" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-145' and this object Jan 13 20:48:12.821906 sshd[5176]: Accepted publickey for core from 139.178.89.65 port 50294 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:48:12.823771 sshd-session[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:48:12.828945 systemd-logind[1872]: New session 29 of user core. Jan 13 20:48:12.835345 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 13 20:48:12.912596 kubelet[3400]: E0113 20:48:12.912552 3400 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:48:12.953560 sshd[5179]: Connection closed by 139.178.89.65 port 50294 Jan 13 20:48:12.959792 sshd-session[5176]: pam_unix(sshd:session): session closed for user core Jan 13 20:48:12.964340 systemd[1]: sshd@28-172.31.17.145:22-139.178.89.65:50294.service: Deactivated successfully. Jan 13 20:48:12.966932 systemd[1]: session-29.scope: Deactivated successfully. Jan 13 20:48:12.969008 systemd-logind[1872]: Session 29 logged out. Waiting for processes to exit. Jan 13 20:48:12.970364 systemd-logind[1872]: Removed session 29. Jan 13 20:48:12.990660 systemd[1]: Started sshd@29-172.31.17.145:22-139.178.89.65:50310.service - OpenSSH per-connection server daemon (139.178.89.65:50310). 
Jan 13 20:48:13.158407 sshd[5185]: Accepted publickey for core from 139.178.89.65 port 50310 ssh2: RSA SHA256:+eazbqo4LC0pKoRk6UgaTVE8Lwm88kdGlrCb+WVmbZI Jan 13 20:48:13.159997 sshd-session[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:48:13.166226 systemd-logind[1872]: New session 30 of user core. Jan 13 20:48:13.171573 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 13 20:48:13.751074 kubelet[3400]: E0113 20:48:13.751024 3400 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 13 20:48:13.752779 kubelet[3400]: E0113 20:48:13.752496 3400 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jan 13 20:48:13.755012 kubelet[3400]: E0113 20:48:13.754970 3400 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 13 20:48:13.755276 kubelet[3400]: E0113 20:48:13.755010 3400 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-52z7c: failed to sync secret cache: timed out waiting for the condition Jan 13 20:48:13.755276 kubelet[3400]: E0113 20:48:13.755220 3400 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jan 13 20:48:13.772007 kubelet[3400]: E0113 20:48:13.771908 3400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9d9bbd3e-031b-4c00-90de-d67d47649a37-cilium-config-path podName:9d9bbd3e-031b-4c00-90de-d67d47649a37 nodeName:}" failed. No retries permitted until 2025-01-13 20:48:14.251122057 +0000 UTC m=+116.863730896 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/9d9bbd3e-031b-4c00-90de-d67d47649a37-cilium-config-path") pod "cilium-52z7c" (UID: "9d9bbd3e-031b-4c00-90de-d67d47649a37") : failed to sync configmap cache: timed out waiting for the condition Jan 13 20:48:13.772231 kubelet[3400]: E0113 20:48:13.772043 3400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d9bbd3e-031b-4c00-90de-d67d47649a37-cilium-ipsec-secrets podName:9d9bbd3e-031b-4c00-90de-d67d47649a37 nodeName:}" failed. No retries permitted until 2025-01-13 20:48:14.271995597 +0000 UTC m=+116.884604447 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/9d9bbd3e-031b-4c00-90de-d67d47649a37-cilium-ipsec-secrets") pod "cilium-52z7c" (UID: "9d9bbd3e-031b-4c00-90de-d67d47649a37") : failed to sync secret cache: timed out waiting for the condition Jan 13 20:48:13.772231 kubelet[3400]: E0113 20:48:13.772063 3400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d9bbd3e-031b-4c00-90de-d67d47649a37-hubble-tls podName:9d9bbd3e-031b-4c00-90de-d67d47649a37 nodeName:}" failed. No retries permitted until 2025-01-13 20:48:14.272054142 +0000 UTC m=+116.884662977 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/9d9bbd3e-031b-4c00-90de-d67d47649a37-hubble-tls") pod "cilium-52z7c" (UID: "9d9bbd3e-031b-4c00-90de-d67d47649a37") : failed to sync secret cache: timed out waiting for the condition Jan 13 20:48:13.772231 kubelet[3400]: E0113 20:48:13.772082 3400 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d9bbd3e-031b-4c00-90de-d67d47649a37-clustermesh-secrets podName:9d9bbd3e-031b-4c00-90de-d67d47649a37 nodeName:}" failed. No retries permitted until 2025-01-13 20:48:14.272074122 +0000 UTC m=+116.884682960 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/9d9bbd3e-031b-4c00-90de-d67d47649a37-clustermesh-secrets") pod "cilium-52z7c" (UID: "9d9bbd3e-031b-4c00-90de-d67d47649a37") : failed to sync secret cache: timed out waiting for the condition Jan 13 20:48:14.469102 containerd[1893]: time="2025-01-13T20:48:14.469051529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-52z7c,Uid:9d9bbd3e-031b-4c00-90de-d67d47649a37,Namespace:kube-system,Attempt:0,}" Jan 13 20:48:14.508642 containerd[1893]: time="2025-01-13T20:48:14.508515663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:48:14.508642 containerd[1893]: time="2025-01-13T20:48:14.508569843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:48:14.508642 containerd[1893]: time="2025-01-13T20:48:14.508586167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:48:14.508888 containerd[1893]: time="2025-01-13T20:48:14.508681210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:48:14.539381 systemd[1]: Started cri-containerd-9fadc11ae9854f170b6d6f4dfb151d38d27e61c97b10c310d92d66cedb0d1eba.scope - libcontainer container 9fadc11ae9854f170b6d6f4dfb151d38d27e61c97b10c310d92d66cedb0d1eba. 
Jan 13 20:48:14.569287 containerd[1893]: time="2025-01-13T20:48:14.569219444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-52z7c,Uid:9d9bbd3e-031b-4c00-90de-d67d47649a37,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fadc11ae9854f170b6d6f4dfb151d38d27e61c97b10c310d92d66cedb0d1eba\"" Jan 13 20:48:14.573273 containerd[1893]: time="2025-01-13T20:48:14.573223630Z" level=info msg="CreateContainer within sandbox \"9fadc11ae9854f170b6d6f4dfb151d38d27e61c97b10c310d92d66cedb0d1eba\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:48:14.590648 containerd[1893]: time="2025-01-13T20:48:14.590497781Z" level=info msg="CreateContainer within sandbox \"9fadc11ae9854f170b6d6f4dfb151d38d27e61c97b10c310d92d66cedb0d1eba\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"91b13f948ca13553aaae50c8350de4dd6f5ae2ba04672455b23eb79d81634025\"" Jan 13 20:48:14.592207 containerd[1893]: time="2025-01-13T20:48:14.591393628Z" level=info msg="StartContainer for \"91b13f948ca13553aaae50c8350de4dd6f5ae2ba04672455b23eb79d81634025\"" Jan 13 20:48:14.616276 kubelet[3400]: E0113 20:48:14.616240 3400 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-p4v68" podUID="d0e18256-5ab2-44bb-8867-03771b5ef763" Jan 13 20:48:14.625783 systemd[1]: Started cri-containerd-91b13f948ca13553aaae50c8350de4dd6f5ae2ba04672455b23eb79d81634025.scope - libcontainer container 91b13f948ca13553aaae50c8350de4dd6f5ae2ba04672455b23eb79d81634025. Jan 13 20:48:14.665795 containerd[1893]: time="2025-01-13T20:48:14.664046682Z" level=info msg="StartContainer for \"91b13f948ca13553aaae50c8350de4dd6f5ae2ba04672455b23eb79d81634025\" returns successfully" Jan 13 20:48:14.687242 systemd[1]: cri-containerd-91b13f948ca13553aaae50c8350de4dd6f5ae2ba04672455b23eb79d81634025.scope: Deactivated successfully. 
Jan 13 20:48:14.730760 containerd[1893]: time="2025-01-13T20:48:14.730545941Z" level=info msg="shim disconnected" id=91b13f948ca13553aaae50c8350de4dd6f5ae2ba04672455b23eb79d81634025 namespace=k8s.io Jan 13 20:48:14.730760 containerd[1893]: time="2025-01-13T20:48:14.730610118Z" level=warning msg="cleaning up after shim disconnected" id=91b13f948ca13553aaae50c8350de4dd6f5ae2ba04672455b23eb79d81634025 namespace=k8s.io Jan 13 20:48:14.730760 containerd[1893]: time="2025-01-13T20:48:14.730620821Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:48:15.221590 containerd[1893]: time="2025-01-13T20:48:15.221529782Z" level=info msg="CreateContainer within sandbox \"9fadc11ae9854f170b6d6f4dfb151d38d27e61c97b10c310d92d66cedb0d1eba\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:48:15.361806 containerd[1893]: time="2025-01-13T20:48:15.361754880Z" level=info msg="CreateContainer within sandbox \"9fadc11ae9854f170b6d6f4dfb151d38d27e61c97b10c310d92d66cedb0d1eba\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7853717b25a986f3d5011c215fcdb5c2a48449c580a9e971ce89d04cc82050e3\"" Jan 13 20:48:15.368222 containerd[1893]: time="2025-01-13T20:48:15.365617528Z" level=info msg="StartContainer for \"7853717b25a986f3d5011c215fcdb5c2a48449c580a9e971ce89d04cc82050e3\"" Jan 13 20:48:15.444107 systemd[1]: Started cri-containerd-7853717b25a986f3d5011c215fcdb5c2a48449c580a9e971ce89d04cc82050e3.scope - libcontainer container 7853717b25a986f3d5011c215fcdb5c2a48449c580a9e971ce89d04cc82050e3. Jan 13 20:48:15.494102 containerd[1893]: time="2025-01-13T20:48:15.493966247Z" level=info msg="StartContainer for \"7853717b25a986f3d5011c215fcdb5c2a48449c580a9e971ce89d04cc82050e3\" returns successfully" Jan 13 20:48:15.505845 systemd[1]: cri-containerd-7853717b25a986f3d5011c215fcdb5c2a48449c580a9e971ce89d04cc82050e3.scope: Deactivated successfully. Jan 13 20:48:15.530353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7853717b25a986f3d5011c215fcdb5c2a48449c580a9e971ce89d04cc82050e3-rootfs.mount: Deactivated successfully. 
Jan 13 20:48:15.539702 containerd[1893]: time="2025-01-13T20:48:15.539638177Z" level=info msg="shim disconnected" id=7853717b25a986f3d5011c215fcdb5c2a48449c580a9e971ce89d04cc82050e3 namespace=k8s.io Jan 13 20:48:15.539702 containerd[1893]: time="2025-01-13T20:48:15.539697142Z" level=warning msg="cleaning up after shim disconnected" id=7853717b25a986f3d5011c215fcdb5c2a48449c580a9e971ce89d04cc82050e3 namespace=k8s.io Jan 13 20:48:15.539702 containerd[1893]: time="2025-01-13T20:48:15.539708242Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:48:16.231080 containerd[1893]: time="2025-01-13T20:48:16.230884836Z" level=info msg="CreateContainer within sandbox \"9fadc11ae9854f170b6d6f4dfb151d38d27e61c97b10c310d92d66cedb0d1eba\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:48:16.276192 containerd[1893]: time="2025-01-13T20:48:16.275641774Z" level=info msg="CreateContainer within sandbox \"9fadc11ae9854f170b6d6f4dfb151d38d27e61c97b10c310d92d66cedb0d1eba\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ac68122c99b81dfd2808a70a0a6d6af52eeb145af816c0b215f66249fc47e7fd\"" Jan 13 20:48:16.277342 containerd[1893]: time="2025-01-13T20:48:16.277297673Z" level=info msg="StartContainer for \"ac68122c99b81dfd2808a70a0a6d6af52eeb145af816c0b215f66249fc47e7fd\"" Jan 13 20:48:16.322891 systemd[1]: Started cri-containerd-ac68122c99b81dfd2808a70a0a6d6af52eeb145af816c0b215f66249fc47e7fd.scope - libcontainer container ac68122c99b81dfd2808a70a0a6d6af52eeb145af816c0b215f66249fc47e7fd. Jan 13 20:48:16.370931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3626030115.mount: Deactivated successfully. Jan 13 20:48:16.374912 containerd[1893]: time="2025-01-13T20:48:16.374813205Z" level=info msg="StartContainer for \"ac68122c99b81dfd2808a70a0a6d6af52eeb145af816c0b215f66249fc47e7fd\" returns successfully" Jan 13 20:48:16.388235 systemd[1]: cri-containerd-ac68122c99b81dfd2808a70a0a6d6af52eeb145af816c0b215f66249fc47e7fd.scope: Deactivated successfully. Jan 13 20:48:16.413876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac68122c99b81dfd2808a70a0a6d6af52eeb145af816c0b215f66249fc47e7fd-rootfs.mount: Deactivated successfully. 
Jan 13 20:48:16.423714 containerd[1893]: time="2025-01-13T20:48:16.423469876Z" level=info msg="shim disconnected" id=ac68122c99b81dfd2808a70a0a6d6af52eeb145af816c0b215f66249fc47e7fd namespace=k8s.io Jan 13 20:48:16.423714 containerd[1893]: time="2025-01-13T20:48:16.423541394Z" level=warning msg="cleaning up after shim disconnected" id=ac68122c99b81dfd2808a70a0a6d6af52eeb145af816c0b215f66249fc47e7fd namespace=k8s.io Jan 13 20:48:16.423714 containerd[1893]: time="2025-01-13T20:48:16.423555693Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:48:16.619175 kubelet[3400]: E0113 20:48:16.619112 3400 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-p4v68" podUID="d0e18256-5ab2-44bb-8867-03771b5ef763" Jan 13 20:48:17.231767 containerd[1893]: time="2025-01-13T20:48:17.231227549Z" level=info msg="CreateContainer within sandbox \"9fadc11ae9854f170b6d6f4dfb151d38d27e61c97b10c310d92d66cedb0d1eba\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:48:17.258429 containerd[1893]: time="2025-01-13T20:48:17.258364276Z" level=info msg="CreateContainer within sandbox \"9fadc11ae9854f170b6d6f4dfb151d38d27e61c97b10c310d92d66cedb0d1eba\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c200c7c9ef1175265b835d1556ad9b79ac336208fe5382c1bd436ba396e1080f\"" Jan 13 20:48:17.260081 containerd[1893]: time="2025-01-13T20:48:17.259335734Z" level=info msg="StartContainer for \"c200c7c9ef1175265b835d1556ad9b79ac336208fe5382c1bd436ba396e1080f\"" Jan 13 20:48:17.303487 systemd[1]: Started cri-containerd-c200c7c9ef1175265b835d1556ad9b79ac336208fe5382c1bd436ba396e1080f.scope - libcontainer container c200c7c9ef1175265b835d1556ad9b79ac336208fe5382c1bd436ba396e1080f. Jan 13 20:48:17.336423 systemd[1]: cri-containerd-c200c7c9ef1175265b835d1556ad9b79ac336208fe5382c1bd436ba396e1080f.scope: Deactivated successfully. Jan 13 20:48:17.340453 containerd[1893]: time="2025-01-13T20:48:17.340303876Z" level=info msg="StartContainer for \"c200c7c9ef1175265b835d1556ad9b79ac336208fe5382c1bd436ba396e1080f\" returns successfully" Jan 13 20:48:17.368614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c200c7c9ef1175265b835d1556ad9b79ac336208fe5382c1bd436ba396e1080f-rootfs.mount: Deactivated successfully. 
Jan 13 20:48:17.375386 containerd[1893]: time="2025-01-13T20:48:17.375320835Z" level=info msg="shim disconnected" id=c200c7c9ef1175265b835d1556ad9b79ac336208fe5382c1bd436ba396e1080f namespace=k8s.io Jan 13 20:48:17.375386 containerd[1893]: time="2025-01-13T20:48:17.375386639Z" level=warning msg="cleaning up after shim disconnected" id=c200c7c9ef1175265b835d1556ad9b79ac336208fe5382c1bd436ba396e1080f namespace=k8s.io Jan 13 20:48:17.375777 containerd[1893]: time="2025-01-13T20:48:17.375399871Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:48:17.913663 kubelet[3400]: E0113 20:48:17.913626 3400 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:48:17.942192 containerd[1893]: time="2025-01-13T20:48:17.942123407Z" level=info msg="StopPodSandbox for \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\"" Jan 13 20:48:17.942367 containerd[1893]: time="2025-01-13T20:48:17.942259902Z" level=info msg="TearDown network for sandbox \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\" successfully" Jan 13 20:48:17.942367 containerd[1893]: time="2025-01-13T20:48:17.942276170Z" level=info msg="StopPodSandbox for \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\" returns successfully" Jan 13 20:48:17.942991 containerd[1893]: time="2025-01-13T20:48:17.942955335Z" level=info msg="RemovePodSandbox for \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\"" Jan 13 20:48:17.952087 containerd[1893]: time="2025-01-13T20:48:17.952037646Z" level=info msg="Forcibly stopping sandbox \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\"" Jan 13 20:48:17.952287 containerd[1893]: time="2025-01-13T20:48:17.952211277Z" level=info msg="TearDown network for sandbox \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\" successfully" Jan 13 20:48:17.958722 containerd[1893]: time="2025-01-13T20:48:17.958669227Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:48:17.958890 containerd[1893]: time="2025-01-13T20:48:17.958753712Z" level=info msg="RemovePodSandbox \"6c1328c6bfdfe02de34e5cd11dea9df6077711d2f135e6d60447ec72fb4b8d5c\" returns successfully" Jan 13 20:48:17.959498 containerd[1893]: time="2025-01-13T20:48:17.959459605Z" level=info msg="StopPodSandbox for \"112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b\"" Jan 13 20:48:17.959595 containerd[1893]: time="2025-01-13T20:48:17.959561818Z" level=info msg="TearDown network for sandbox \"112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b\" successfully" Jan 13 20:48:17.959595 containerd[1893]: time="2025-01-13T20:48:17.959577730Z" level=info msg="StopPodSandbox for \"112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b\" returns successfully" Jan 13 20:48:17.963171 containerd[1893]: time="2025-01-13T20:48:17.959900845Z" level=info msg="RemovePodSandbox for \"112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b\"" Jan 13 20:48:17.963171 containerd[1893]: time="2025-01-13T20:48:17.959934216Z" level=info msg="Forcibly stopping sandbox \"112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b\"" Jan 13 20:48:17.963171 containerd[1893]: time="2025-01-13T20:48:17.960010052Z" level=info msg="TearDown network for sandbox \"112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b\" successfully" Jan 13 20:48:17.966087 containerd[1893]: time="2025-01-13T20:48:17.966030560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:48:17.966351 containerd[1893]: time="2025-01-13T20:48:17.966175629Z" level=info msg="RemovePodSandbox \"112766182759871a94929d527970ba024cab3c0db569d7b0aca61e5a6455f59b\" returns successfully" Jan 13 20:48:18.240173 containerd[1893]: time="2025-01-13T20:48:18.240027580Z" level=info msg="CreateContainer within sandbox \"9fadc11ae9854f170b6d6f4dfb151d38d27e61c97b10c310d92d66cedb0d1eba\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:48:18.289078 containerd[1893]: time="2025-01-13T20:48:18.289028419Z" level=info msg="CreateContainer within sandbox \"9fadc11ae9854f170b6d6f4dfb151d38d27e61c97b10c310d92d66cedb0d1eba\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8230bc57ea557aac218659818643987c9d7d56c7956546bb9c76650a318d0190\"" Jan 13 20:48:18.289967 containerd[1893]: time="2025-01-13T20:48:18.289928191Z" level=info msg="StartContainer for \"8230bc57ea557aac218659818643987c9d7d56c7956546bb9c76650a318d0190\"" Jan 13 20:48:18.332366 systemd[1]: Started cri-containerd-8230bc57ea557aac218659818643987c9d7d56c7956546bb9c76650a318d0190.scope - libcontainer container 8230bc57ea557aac218659818643987c9d7d56c7956546bb9c76650a318d0190. Jan 13 20:48:18.367494 containerd[1893]: time="2025-01-13T20:48:18.367450226Z" level=info msg="StartContainer for \"8230bc57ea557aac218659818643987c9d7d56c7956546bb9c76650a318d0190\" returns successfully" Jan 13 20:48:18.406105 systemd[1]: run-containerd-runc-k8s.io-8230bc57ea557aac218659818643987c9d7d56c7956546bb9c76650a318d0190-runc.XLJFiK.mount: Deactivated successfully. 
Jan 13 20:48:18.615443 kubelet[3400]: E0113 20:48:18.615366 3400 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-2kg7z" podUID="65db7c41-5ea2-4551-b363-3b243862d863" Jan 13 20:48:18.650834 kubelet[3400]: E0113 20:48:18.650780 3400 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-p4v68" podUID="d0e18256-5ab2-44bb-8867-03771b5ef763" Jan 13 20:48:19.288738 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 13 20:48:19.887072 kubelet[3400]: E0113 20:48:19.887006 3400 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58152->127.0.0.1:32959: write tcp 127.0.0.1:58152->127.0.0.1:32959: write: broken pipe Jan 13 20:48:19.956076 kubelet[3400]: I0113 20:48:19.956035 3400 setters.go:568] "Node became not ready" node="ip-172-31-17-145" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:48:19Z","lastTransitionTime":"2025-01-13T20:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 13 20:48:20.615860 kubelet[3400]: E0113 20:48:20.615820 3400 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-p4v68" podUID="d0e18256-5ab2-44bb-8867-03771b5ef763" Jan 13 20:48:20.616220 kubelet[3400]: E0113 20:48:20.616005 3400 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-2kg7z" podUID="65db7c41-5ea2-4551-b363-3b243862d863" Jan 13 20:48:21.991773 systemd[1]: run-containerd-runc-k8s.io-8230bc57ea557aac218659818643987c9d7d56c7956546bb9c76650a318d0190-runc.M4ELAX.mount: Deactivated successfully. Jan 13 20:48:22.371700 (udev-worker)[6057]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:48:22.373900 (udev-worker)[6058]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 20:48:22.383832 systemd-networkd[1729]: lxc_health: Link UP Jan 13 20:48:22.401761 systemd-networkd[1729]: lxc_health: Gained carrier Jan 13 20:48:22.505196 kubelet[3400]: I0113 20:48:22.505038 3400 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-52z7c" podStartSLOduration=10.504984889 podStartE2EDuration="10.504984889s" podCreationTimestamp="2025-01-13 20:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:48:19.273651629 +0000 UTC m=+121.886260487" watchObservedRunningTime="2025-01-13 20:48:22.504984889 +0000 UTC m=+125.117593754" Jan 13 20:48:22.617604 kubelet[3400]: E0113 20:48:22.616031 3400 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-p4v68" podUID="d0e18256-5ab2-44bb-8867-03771b5ef763" Jan 13 20:48:22.617604 kubelet[3400]: E0113 20:48:22.616291 3400 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-2kg7z" podUID="65db7c41-5ea2-4551-b363-3b243862d863" Jan 13 20:48:23.816336 systemd-networkd[1729]: lxc_health: Gained IPv6LL Jan 13 20:48:26.405887 ntpd[1867]: Listen normally on 15 lxc_health [fe80::705d:68ff:fe04:52e6%14]:123 Jan 13 20:48:26.406369 ntpd[1867]: 13 Jan 20:48:26 ntpd[1867]: Listen normally on 15 lxc_health [fe80::705d:68ff:fe04:52e6%14]:123 Jan 13 20:48:27.035339 kubelet[3400]: E0113 20:48:27.035245 3400 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58182->127.0.0.1:32959: write tcp 127.0.0.1:58182->127.0.0.1:32959: write: broken pipe Jan 13 20:48:29.364066 sshd[5187]: Connection closed by 139.178.89.65 port 50310 Jan 13 20:48:29.365633 sshd-session[5185]: pam_unix(sshd:session): session closed for user core Jan 13 20:48:29.369386 systemd[1]: sshd@29-172.31.17.145:22-139.178.89.65:50310.service: Deactivated successfully. Jan 13 20:48:29.372043 systemd[1]: session-30.scope: Deactivated successfully. Jan 13 20:48:29.374420 systemd-logind[1872]: Session 30 logged out. Waiting for processes to exit. Jan 13 20:48:29.376332 systemd-logind[1872]: Removed session 30. Jan 13 20:49:06.729680 systemd[1]: cri-containerd-9f289ed9a522ce1cd40fd721adc9191020049f3edba93110a2d243e6e2043e25.scope: Deactivated successfully. Jan 13 20:49:06.730794 systemd[1]: cri-containerd-9f289ed9a522ce1cd40fd721adc9191020049f3edba93110a2d243e6e2043e25.scope: Consumed 3.969s CPU time, 27.8M memory peak, 0B memory swap peak. Jan 13 20:49:06.766566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f289ed9a522ce1cd40fd721adc9191020049f3edba93110a2d243e6e2043e25-rootfs.mount: Deactivated successfully. 
Jan 13 20:49:06.774913 containerd[1893]: time="2025-01-13T20:49:06.774838093Z" level=info msg="shim disconnected" id=9f289ed9a522ce1cd40fd721adc9191020049f3edba93110a2d243e6e2043e25 namespace=k8s.io Jan 13 20:49:06.774913 containerd[1893]: time="2025-01-13T20:49:06.774911108Z" level=warning msg="cleaning up after shim disconnected" id=9f289ed9a522ce1cd40fd721adc9191020049f3edba93110a2d243e6e2043e25 namespace=k8s.io Jan 13 20:49:06.774913 containerd[1893]: time="2025-01-13T20:49:06.774922227Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:49:07.386888 kubelet[3400]: I0113 20:49:07.386853 3400 scope.go:117] "RemoveContainer" containerID="9f289ed9a522ce1cd40fd721adc9191020049f3edba93110a2d243e6e2043e25" Jan 13 20:49:07.391161 containerd[1893]: time="2025-01-13T20:49:07.391103548Z" level=info msg="CreateContainer within sandbox \"c3474f131ca19197e6962ccc6aea5593a70f2c1d33267bff2c860b5a827fe95a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 13 20:49:07.428102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2689711611.mount: Deactivated successfully. Jan 13 20:49:07.433636 containerd[1893]: time="2025-01-13T20:49:07.433579417Z" level=info msg="CreateContainer within sandbox \"c3474f131ca19197e6962ccc6aea5593a70f2c1d33267bff2c860b5a827fe95a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"92f543d0fb3537b8bcb2d68e0d44f272a4a2825d868b63a510a379e4436e9c00\"" Jan 13 20:49:07.434258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4027216407.mount: Deactivated successfully. Jan 13 20:49:07.436379 containerd[1893]: time="2025-01-13T20:49:07.434249351Z" level=info msg="StartContainer for \"92f543d0fb3537b8bcb2d68e0d44f272a4a2825d868b63a510a379e4436e9c00\"" Jan 13 20:49:07.480363 systemd[1]: Started cri-containerd-92f543d0fb3537b8bcb2d68e0d44f272a4a2825d868b63a510a379e4436e9c00.scope - libcontainer container 92f543d0fb3537b8bcb2d68e0d44f272a4a2825d868b63a510a379e4436e9c00. Jan 13 20:49:07.542680 containerd[1893]: time="2025-01-13T20:49:07.542629881Z" level=info msg="StartContainer for \"92f543d0fb3537b8bcb2d68e0d44f272a4a2825d868b63a510a379e4436e9c00\" returns successfully" Jan 13 20:49:10.267454 kubelet[3400]: E0113 20:49:10.267408 3400 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-145?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 13 20:49:12.328218 systemd[1]: cri-containerd-398c14a53b667423d476c9a29397d32ae3efdc1c760a2d6ca62e694dc66c191f.scope: Deactivated successfully. Jan 13 20:49:12.329054 systemd[1]: cri-containerd-398c14a53b667423d476c9a29397d32ae3efdc1c760a2d6ca62e694dc66c191f.scope: Consumed 1.668s CPU time, 18.4M memory peak, 0B memory swap peak. Jan 13 20:49:12.354901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-398c14a53b667423d476c9a29397d32ae3efdc1c760a2d6ca62e694dc66c191f-rootfs.mount: Deactivated successfully. 
Jan 13 20:49:12.469409 containerd[1893]: time="2025-01-13T20:49:12.469342801Z" level=info msg="shim disconnected" id=398c14a53b667423d476c9a29397d32ae3efdc1c760a2d6ca62e694dc66c191f namespace=k8s.io Jan 13 20:49:12.469409 containerd[1893]: time="2025-01-13T20:49:12.469406582Z" level=warning msg="cleaning up after shim disconnected" id=398c14a53b667423d476c9a29397d32ae3efdc1c760a2d6ca62e694dc66c191f namespace=k8s.io Jan 13 20:49:12.469409 containerd[1893]: time="2025-01-13T20:49:12.469417101Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:49:13.443693 kubelet[3400]: I0113 20:49:13.443111 3400 scope.go:117] "RemoveContainer" containerID="398c14a53b667423d476c9a29397d32ae3efdc1c760a2d6ca62e694dc66c191f" Jan 13 20:49:13.459714 containerd[1893]: time="2025-01-13T20:49:13.458094678Z" level=info msg="CreateContainer within sandbox \"f441669f4476d202f95701f8137298aafd3b56669e8a4acc3f64d60c146ac937\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 13 20:49:13.495695 containerd[1893]: time="2025-01-13T20:49:13.495645838Z" level=info msg="CreateContainer within sandbox \"f441669f4476d202f95701f8137298aafd3b56669e8a4acc3f64d60c146ac937\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1f2e31940688f2e73988aa5a044b6a61b644e13f50e722ef7d7a681f1190b64d\"" Jan 13 20:49:13.496638 containerd[1893]: time="2025-01-13T20:49:13.496602766Z" level=info msg="StartContainer for \"1f2e31940688f2e73988aa5a044b6a61b644e13f50e722ef7d7a681f1190b64d\"" Jan 13 20:49:13.590419 systemd[1]: run-containerd-runc-k8s.io-1f2e31940688f2e73988aa5a044b6a61b644e13f50e722ef7d7a681f1190b64d-runc.JDBxKf.mount: Deactivated successfully. Jan 13 20:49:13.609712 systemd[1]: Started cri-containerd-1f2e31940688f2e73988aa5a044b6a61b644e13f50e722ef7d7a681f1190b64d.scope - libcontainer container 1f2e31940688f2e73988aa5a044b6a61b644e13f50e722ef7d7a681f1190b64d. Jan 13 20:49:13.704274 containerd[1893]: time="2025-01-13T20:49:13.703994468Z" level=info msg="StartContainer for \"1f2e31940688f2e73988aa5a044b6a61b644e13f50e722ef7d7a681f1190b64d\" returns successfully" Jan 13 20:49:20.268352 kubelet[3400]: E0113 20:49:20.268257 3400 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-145?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"