Dec 13 01:31:24.106807 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:31:24.106845 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:31:24.106864 kernel: BIOS-provided physical RAM map:
Dec 13 01:31:24.106875 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:31:24.106885 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:31:24.106896 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:31:24.107021 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Dec 13 01:31:24.107035 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Dec 13 01:31:24.107049 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Dec 13 01:31:24.107063 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:31:24.107076 kernel: NX (Execute Disable) protection: active
Dec 13 01:31:24.107090 kernel: APIC: Static calls initialized
Dec 13 01:31:24.107104 kernel: SMBIOS 2.7 present.
Dec 13 01:31:24.107118 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 13 01:31:24.107158 kernel: Hypervisor detected: KVM
Dec 13 01:31:24.107171 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:31:24.107182 kernel: kvm-clock: using sched offset of 7619446743 cycles
Dec 13 01:31:24.107195 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:31:24.107208 kernel: tsc: Detected 2500.004 MHz processor
Dec 13 01:31:24.107220 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:31:24.107233 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:31:24.107249 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Dec 13 01:31:24.107263 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 01:31:24.107277 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:31:24.107290 kernel: Using GB pages for direct mapping
Dec 13 01:31:24.107304 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:31:24.107317 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Dec 13 01:31:24.107331 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Dec 13 01:31:24.107344 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 01:31:24.107358 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 13 01:31:24.107375 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Dec 13 01:31:24.107388 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 01:31:24.107402 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 01:31:24.107415 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 13 01:31:24.107429 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 01:31:24.107443 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 13 01:31:24.107456 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 13 01:31:24.107470 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 01:31:24.107484 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Dec 13 01:31:24.107500 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Dec 13 01:31:24.107519 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Dec 13 01:31:24.107533 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Dec 13 01:31:24.107548 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Dec 13 01:31:24.107562 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Dec 13 01:31:24.107579 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Dec 13 01:31:24.107595 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Dec 13 01:31:24.107610 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Dec 13 01:31:24.107624 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Dec 13 01:31:24.107639 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 01:31:24.107654 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 01:31:24.107669 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 13 01:31:24.107685 kernel: NUMA: Initialized distance table, cnt=1
Dec 13 01:31:24.107700 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Dec 13 01:31:24.107717 kernel: Zone ranges:
Dec 13 01:31:24.107731 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:31:24.107745 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Dec 13 01:31:24.107759 kernel: Normal empty
Dec 13 01:31:24.107772 kernel: Movable zone start for each node
Dec 13 01:31:24.107786 kernel: Early memory node ranges
Dec 13 01:31:24.107800 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:31:24.107814 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Dec 13 01:31:24.107828 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Dec 13 01:31:24.107842 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:31:24.107858 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:31:24.107872 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Dec 13 01:31:24.107886 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 01:31:24.107899 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:31:24.107913 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 13 01:31:24.107927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:31:24.107941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:31:24.107954 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:31:24.107968 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:31:24.107984 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:31:24.107998 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:31:24.108011 kernel: TSC deadline timer available
Dec 13 01:31:24.108025 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 01:31:24.108039 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:31:24.108053 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 13 01:31:24.108066 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:31:24.108081 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:31:24.108095 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 01:31:24.108111 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 01:31:24.108158 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 01:31:24.108172 kernel: pcpu-alloc: [0] 0 1
Dec 13 01:31:24.108185 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:31:24.108199 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:31:24.108302 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:31:24.108321 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:31:24.108335 kernel: random: crng init done
Dec 13 01:31:24.108353 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:31:24.108367 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:31:24.108381 kernel: Fallback order for Node 0: 0
Dec 13 01:31:24.108395 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Dec 13 01:31:24.108408 kernel: Policy zone: DMA32
Dec 13 01:31:24.108422 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:31:24.108436 kernel: Memory: 1932344K/2057760K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125156K reserved, 0K cma-reserved)
Dec 13 01:31:24.108450 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:31:24.108464 kernel: Kernel/User page tables isolation: enabled
Dec 13 01:31:24.108481 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:31:24.108495 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:31:24.108509 kernel: Dynamic Preempt: voluntary
Dec 13 01:31:24.108523 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:31:24.108538 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:31:24.108552 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:31:24.108567 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:31:24.108590 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:31:24.108605 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:31:24.108622 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:31:24.108635 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:31:24.108649 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 01:31:24.108663 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:31:24.108677 kernel: Console: colour VGA+ 80x25
Dec 13 01:31:24.108691 kernel: printk: console [ttyS0] enabled
Dec 13 01:31:24.108705 kernel: ACPI: Core revision 20230628
Dec 13 01:31:24.108719 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 13 01:31:24.108733 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:31:24.108750 kernel: x2apic enabled
Dec 13 01:31:24.108764 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:31:24.108788 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Dec 13 01:31:24.108806 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
Dec 13 01:31:24.108821 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 01:31:24.108836 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 01:31:24.108850 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:31:24.108865 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:31:24.108879 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:31:24.108893 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:31:24.108908 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 01:31:24.108923 kernel: RETBleed: Vulnerable
Dec 13 01:31:24.108941 kernel: Speculative Store Bypass: Vulnerable
Dec 13 01:31:24.108955 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:31:24.108970 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:31:24.108985 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 01:31:24.108999 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:31:24.109014 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:31:24.109029 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:31:24.109046 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 01:31:24.109061 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 01:31:24.109075 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 01:31:24.109090 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 01:31:24.109105 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 01:31:24.109120 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 01:31:24.109154 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:31:24.109169 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 01:31:24.109274 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 01:31:24.109290 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 13 01:31:24.109309 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 13 01:31:24.109323 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 13 01:31:24.109339 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 13 01:31:24.109353 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 13 01:31:24.109368 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:31:24.109383 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:31:24.109398 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:31:24.109493 kernel: landlock: Up and running.
Dec 13 01:31:24.109511 kernel: SELinux: Initializing.
Dec 13 01:31:24.109524 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:31:24.109538 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:31:24.109551 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4)
Dec 13 01:31:24.109569 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:31:24.109584 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:31:24.109598 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:31:24.109612 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 01:31:24.109629 kernel: signal: max sigframe size: 3632
Dec 13 01:31:24.109643 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:31:24.109658 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:31:24.109674 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 01:31:24.109689 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:31:24.109708 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:31:24.109721 kernel: .... node #0, CPUs: #1
Dec 13 01:31:24.109737 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 01:31:24.109754 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 01:31:24.109770 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:31:24.109788 kernel: smpboot: Max logical packages: 1
Dec 13 01:31:24.109802 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
Dec 13 01:31:24.109889 kernel: devtmpfs: initialized
Dec 13 01:31:24.109907 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:31:24.109922 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:31:24.109938 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:31:24.109954 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:31:24.109971 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:31:24.109986 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:31:24.110000 kernel: audit: type=2000 audit(1734053483.107:1): state=initialized audit_enabled=0 res=1
Dec 13 01:31:24.110013 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:31:24.110027 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:31:24.110043 kernel: cpuidle: using governor menu
Dec 13 01:31:24.110057 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:31:24.110074 kernel: dca service started, version 1.12.1
Dec 13 01:31:24.110089 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:31:24.110105 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:31:24.110146 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:31:24.110161 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:31:24.110176 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:31:24.110191 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:31:24.110210 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:31:24.110225 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:31:24.110240 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:31:24.110255 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:31:24.110270 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 01:31:24.110284 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:31:24.110306 kernel: ACPI: Interpreter enabled
Dec 13 01:31:24.110324 kernel: ACPI: PM: (supports S0 S5)
Dec 13 01:31:24.110357 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:31:24.110385 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:31:24.110401 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:31:24.110417 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 01:31:24.110433 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:31:24.110656 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:31:24.110795 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 01:31:24.110923 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 01:31:24.110943 kernel: acpiphp: Slot [3] registered
Dec 13 01:31:24.110963 kernel: acpiphp: Slot [4] registered
Dec 13 01:31:24.110980 kernel: acpiphp: Slot [5] registered
Dec 13 01:31:24.110996 kernel: acpiphp: Slot [6] registered
Dec 13 01:31:24.111012 kernel: acpiphp: Slot [7] registered
Dec 13 01:31:24.111028 kernel: acpiphp: Slot [8] registered
Dec 13 01:31:24.111044 kernel: acpiphp: Slot [9] registered
Dec 13 01:31:24.111060 kernel: acpiphp: Slot [10] registered
Dec 13 01:31:24.111076 kernel: acpiphp: Slot [11] registered
Dec 13 01:31:24.111091 kernel: acpiphp: Slot [12] registered
Dec 13 01:31:24.111110 kernel: acpiphp: Slot [13] registered
Dec 13 01:31:24.111146 kernel: acpiphp: Slot [14] registered
Dec 13 01:31:24.111162 kernel: acpiphp: Slot [15] registered
Dec 13 01:31:24.111178 kernel: acpiphp: Slot [16] registered
Dec 13 01:31:24.111194 kernel: acpiphp: Slot [17] registered
Dec 13 01:31:24.111210 kernel: acpiphp: Slot [18] registered
Dec 13 01:31:24.111226 kernel: acpiphp: Slot [19] registered
Dec 13 01:31:24.111242 kernel: acpiphp: Slot [20] registered
Dec 13 01:31:24.111258 kernel: acpiphp: Slot [21] registered
Dec 13 01:31:24.111278 kernel: acpiphp: Slot [22] registered
Dec 13 01:31:24.111293 kernel: acpiphp: Slot [23] registered
Dec 13 01:31:24.111309 kernel: acpiphp: Slot [24] registered
Dec 13 01:31:24.111325 kernel: acpiphp: Slot [25] registered
Dec 13 01:31:24.111341 kernel: acpiphp: Slot [26] registered
Dec 13 01:31:24.111356 kernel: acpiphp: Slot [27] registered
Dec 13 01:31:24.111372 kernel: acpiphp: Slot [28] registered
Dec 13 01:31:24.111388 kernel: acpiphp: Slot [29] registered
Dec 13 01:31:24.111404 kernel: acpiphp: Slot [30] registered
Dec 13 01:31:24.111420 kernel: acpiphp: Slot [31] registered
Dec 13 01:31:24.111438 kernel: PCI host bridge to bus 0000:00
Dec 13 01:31:24.111571 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:31:24.111808 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:31:24.111930 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:31:24.112115 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 01:31:24.112249 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:31:24.112394 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 01:31:24.112631 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 01:31:24.112785 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Dec 13 01:31:24.112925 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 01:31:24.113066 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Dec 13 01:31:24.113241 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 13 01:31:24.113398 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 13 01:31:24.113725 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 13 01:31:24.114704 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 13 01:31:24.114879 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 13 01:31:24.115700 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 13 01:31:24.117795 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 10742 usecs
Dec 13 01:31:24.118558 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Dec 13 01:31:24.118701 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Dec 13 01:31:24.119043 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 01:31:24.119260 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:31:24.119535 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 01:31:24.119697 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Dec 13 01:31:24.119864 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 01:31:24.120104 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Dec 13 01:31:24.120175 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:31:24.120199 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:31:24.120212 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:31:24.120225 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:31:24.120287 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 01:31:24.120312 kernel: iommu: Default domain type: Translated
Dec 13 01:31:24.120326 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:31:24.120342 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:31:24.120358 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:31:24.120374 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 01:31:24.120392 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Dec 13 01:31:24.124012 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 13 01:31:24.124293 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 13 01:31:24.124695 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:31:24.124728 kernel: vgaarb: loaded
Dec 13 01:31:24.124747 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 01:31:24.124767 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 13 01:31:24.124785 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:31:24.124813 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:31:24.124832 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:31:24.124850 kernel: pnp: PnP ACPI init
Dec 13 01:31:24.124868 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 01:31:24.124887 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:31:24.124971 kernel: NET: Registered PF_INET protocol family
Dec 13 01:31:24.124992 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:31:24.125011 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 01:31:24.125029 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:31:24.125053 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:31:24.125071 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 01:31:24.125090 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 01:31:24.125108 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:31:24.125143 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:31:24.125162 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:31:24.125181 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:31:24.125395 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:31:24.125534 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:31:24.125739 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:31:24.125878 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 01:31:24.126034 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 01:31:24.126057 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:31:24.126075 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 01:31:24.126093 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Dec 13 01:31:24.126109 kernel: clocksource: Switched to clocksource tsc
Dec 13 01:31:24.130730 kernel: Initialise system trusted keyrings
Dec 13 01:31:24.130767 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 01:31:24.130849 kernel: Key type asymmetric registered
Dec 13 01:31:24.130865 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:31:24.130880 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:31:24.130896 kernel: io scheduler mq-deadline registered
Dec 13 01:31:24.130912 kernel: io scheduler kyber registered
Dec 13 01:31:24.130928 kernel: io scheduler bfq registered
Dec 13 01:31:24.130944 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:31:24.130959 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:31:24.130979 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:31:24.130996 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:31:24.131011 kernel: i8042: Warning: Keylock active
Dec 13 01:31:24.131029 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:31:24.131050 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:31:24.132074 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 01:31:24.132246 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 01:31:24.132400 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T01:31:23 UTC (1734053483)
Dec 13 01:31:24.132615 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 01:31:24.132638 kernel: intel_pstate: CPU model not supported
Dec 13 01:31:24.132654 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:31:24.132670 kernel: Segment Routing with IPv6
Dec 13 01:31:24.132684 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:31:24.132700 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:31:24.132718 kernel: Key type dns_resolver registered
Dec 13 01:31:24.132735 kernel: IPI shorthand broadcast: enabled
Dec 13 01:31:24.132750 kernel: sched_clock: Marking stable (638003546, 347895739)->(1100747406, -114848121)
Dec 13 01:31:24.132771 kernel: registered taskstats version 1
Dec 13 01:31:24.132787 kernel: Loading compiled-in X.509 certificates
Dec 13 01:31:24.132804 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:31:24.132820 kernel: Key type .fscrypt registered
Dec 13 01:31:24.132835 kernel: Key type fscrypt-provisioning registered
Dec 13 01:31:24.132851 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:31:24.132867 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:31:24.132881 kernel: ima: No architecture policies found
Dec 13 01:31:24.132900 kernel: clk: Disabling unused clocks
Dec 13 01:31:24.132916 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:31:24.132933 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:31:24.132951 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:31:24.132968 kernel: Run /init as init process
Dec 13 01:31:24.132985 kernel: with arguments:
Dec 13 01:31:24.133002 kernel: /init
Dec 13 01:31:24.133016 kernel: with environment:
Dec 13 01:31:24.133031 kernel: HOME=/
Dec 13 01:31:24.133047 kernel: TERM=linux
Dec 13 01:31:24.133067 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:31:24.133117 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:31:24.133164 systemd[1]: Detected virtualization amazon.
Dec 13 01:31:24.133181 systemd[1]: Detected architecture x86-64.
Dec 13 01:31:24.133198 systemd[1]: Running in initrd.
Dec 13 01:31:24.133214 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:31:24.133236 systemd[1]: Hostname set to .
Dec 13 01:31:24.133329 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:31:24.133359 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:31:24.133383 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:31:24.133406 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:31:24.133428 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:31:24.133445 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:31:24.133462 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:31:24.133486 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:31:24.133506 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:31:24.133525 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:31:24.133544 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:31:24.133562 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:31:24.133580 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:31:24.133599 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:31:24.133621 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:31:24.133639 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:31:24.133657 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:31:24.133676 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:31:24.133694 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:31:24.133713 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:31:24.133731 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:31:24.133750 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:31:24.133769 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:31:24.133791 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:31:24.133809 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:31:24.133827 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:31:24.133845 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:31:24.133864 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:31:24.133889 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:31:24.133908 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:31:24.133927 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:31:24.133978 systemd-journald[178]: Collecting audit messages is disabled.
Dec 13 01:31:24.134023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:31:24.134042 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:31:24.134061 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:31:24.134079 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:31:24.134100 systemd-journald[178]: Journal started
Dec 13 01:31:24.134155 systemd-journald[178]: Runtime Journal (/run/log/journal/ec214513bd5df29a0f396236c68630b3) is 4.8M, max 38.6M, 33.7M free.
Dec 13 01:31:24.141195 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:31:24.126176 systemd-modules-load[179]: Inserted module 'overlay'
Dec 13 01:31:24.316915 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:31:24.316960 kernel: Bridge firewalling registered
Dec 13 01:31:24.191946 systemd-modules-load[179]: Inserted module 'br_netfilter'
Dec 13 01:31:24.323625 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:31:24.329262 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:31:24.331307 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:31:24.346376 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:31:24.352779 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:31:24.359829 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:31:24.364184 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:31:24.372344 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:31:24.393154 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:31:24.408519 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:31:24.411367 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:31:24.424438 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:31:24.429539 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:31:24.439397 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:31:24.468111 dracut-cmdline[212]: dracut-dracut-053
Dec 13 01:31:24.476727 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:31:24.552185 systemd-resolved[214]: Positive Trust Anchors:
Dec 13 01:31:24.552205 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:31:24.552267 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:31:24.573339 systemd-resolved[214]: Defaulting to hostname 'linux'.
Dec 13 01:31:24.576114 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:31:24.579452 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:31:24.642163 kernel: SCSI subsystem initialized
Dec 13 01:31:24.653162 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:31:24.666175 kernel: iscsi: registered transport (tcp)
Dec 13 01:31:24.692899 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:31:24.692985 kernel: QLogic iSCSI HBA Driver
Dec 13 01:31:24.747431 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:31:24.757550 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:31:24.819164 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:31:24.819266 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:31:24.821357 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:31:24.892180 kernel: raid6: avx512x4 gen() 9593 MB/s
Dec 13 01:31:24.910173 kernel: raid6: avx512x2 gen() 8267 MB/s
Dec 13 01:31:24.928287 kernel: raid6: avx512x1 gen() 9594 MB/s
Dec 13 01:31:24.945341 kernel: raid6: avx2x4 gen() 10258 MB/s
Dec 13 01:31:24.962177 kernel: raid6: avx2x2 gen() 9858 MB/s
Dec 13 01:31:24.980325 kernel: raid6: avx2x1 gen() 8166 MB/s
Dec 13 01:31:24.980406 kernel: raid6: using algorithm avx2x4 gen() 10258 MB/s
Dec 13 01:31:24.999880 kernel: raid6: .... xor() 3309 MB/s, rmw enabled
Dec 13 01:31:24.999965 kernel: raid6: using avx512x2 recovery algorithm
Dec 13 01:31:25.065174 kernel: xor: automatically using best checksumming function avx
Dec 13 01:31:25.458176 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:31:25.481981 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:31:25.491860 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:31:25.524592 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Dec 13 01:31:25.533014 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:31:25.543595 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:31:25.624446 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Dec 13 01:31:25.668304 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:31:25.676769 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:31:25.735515 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:31:25.748429 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:31:25.796954 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:31:25.804182 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:31:25.807760 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:31:25.810962 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:31:25.820511 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:31:25.857695 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:31:25.879268 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:31:25.879337 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 01:31:25.916871 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 01:31:25.917277 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Dec 13 01:31:25.917485 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:31:25.917510 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:31:25.917534 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:9f:fc:26:3d:69
Dec 13 01:31:25.916256 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:31:25.916423 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:31:25.918541 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:31:25.922493 (udev-worker)[448]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:31:25.922847 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:31:25.923084 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:31:25.927714 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:31:25.940467 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:31:25.964186 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 01:31:25.968344 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 01:31:25.983640 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 01:31:25.989171 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:31:25.989243 kernel: GPT:9289727 != 16777215
Dec 13 01:31:25.989266 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:31:25.989289 kernel: GPT:9289727 != 16777215
Dec 13 01:31:25.989419 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:31:25.989445 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:31:26.134895 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Dec 13 01:31:26.177971 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (459)
Dec 13 01:31:26.195157 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (447)
Dec 13 01:31:26.196096 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:31:26.216614 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:31:26.277652 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 01:31:26.285557 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:31:26.305804 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Dec 13 01:31:26.322645 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Dec 13 01:31:26.322807 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Dec 13 01:31:26.339464 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:31:26.352471 disk-uuid[630]: Primary Header is updated.
Dec 13 01:31:26.352471 disk-uuid[630]: Secondary Entries is updated.
Dec 13 01:31:26.352471 disk-uuid[630]: Secondary Header is updated.
Dec 13 01:31:26.360151 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:31:26.368169 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:31:26.389172 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:31:27.383183 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:31:27.383795 disk-uuid[631]: The operation has completed successfully.
Dec 13 01:31:27.554957 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:31:27.555116 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:31:27.592672 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:31:27.611008 sh[972]: Success
Dec 13 01:31:27.626603 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 01:31:27.765318 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:31:27.779343 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:31:27.782918 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:31:27.824930 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:31:27.825032 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:31:27.825059 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:31:27.826205 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:31:27.828076 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:31:27.984223 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:31:27.987313 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:31:27.998080 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:31:28.013433 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:31:28.019450 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:31:28.044089 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:31:28.044185 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:31:28.044211 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:31:28.050156 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:31:28.066203 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:31:28.066837 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:31:28.104927 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:31:28.115602 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:31:28.179059 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:31:28.187310 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:31:28.231978 systemd-networkd[1164]: lo: Link UP
Dec 13 01:31:28.232238 systemd-networkd[1164]: lo: Gained carrier
Dec 13 01:31:28.236204 systemd-networkd[1164]: Enumeration completed
Dec 13 01:31:28.236805 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:31:28.236812 systemd-networkd[1164]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:31:28.236957 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:31:28.238513 systemd[1]: Reached target network.target - Network.
Dec 13 01:31:28.254659 systemd-networkd[1164]: eth0: Link UP
Dec 13 01:31:28.254670 systemd-networkd[1164]: eth0: Gained carrier
Dec 13 01:31:28.254688 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:31:28.271309 systemd-networkd[1164]: eth0: DHCPv4 address 172.31.29.36/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 01:31:28.703112 ignition[1095]: Ignition 2.19.0
Dec 13 01:31:28.703152 ignition[1095]: Stage: fetch-offline
Dec 13 01:31:28.703445 ignition[1095]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:31:28.703459 ignition[1095]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:31:28.703801 ignition[1095]: Ignition finished successfully
Dec 13 01:31:28.712241 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:31:28.727589 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:31:28.746021 ignition[1173]: Ignition 2.19.0
Dec 13 01:31:28.746036 ignition[1173]: Stage: fetch
Dec 13 01:31:28.746543 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:31:28.746561 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:31:28.746692 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:31:28.813597 ignition[1173]: PUT result: OK
Dec 13 01:31:28.837513 ignition[1173]: parsed url from cmdline: ""
Dec 13 01:31:28.837527 ignition[1173]: no config URL provided
Dec 13 01:31:28.837557 ignition[1173]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:31:28.837578 ignition[1173]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:31:28.837606 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:31:28.845031 ignition[1173]: PUT result: OK
Dec 13 01:31:28.845243 ignition[1173]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 01:31:28.854244 ignition[1173]: GET result: OK
Dec 13 01:31:28.864082 ignition[1173]: parsing config with SHA512: 16456a633ba76b0075761c521f8ba13ad11ca9257eb99c38f51812d137ea3e2f6e2dcc565ee8fafd261b7e0dfcb0033eb8cad487ad33d3a70790c175d6e629a4
Dec 13 01:31:28.872405 unknown[1173]: fetched base config from "system"
Dec 13 01:31:28.873790 ignition[1173]: fetch: fetch complete
Dec 13 01:31:28.872431 unknown[1173]: fetched base config from "system"
Dec 13 01:31:28.873798 ignition[1173]: fetch: fetch passed
Dec 13 01:31:28.872446 unknown[1173]: fetched user config from "aws"
Dec 13 01:31:28.873874 ignition[1173]: Ignition finished successfully
Dec 13 01:31:28.878052 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:31:28.891461 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:31:28.912171 ignition[1179]: Ignition 2.19.0
Dec 13 01:31:28.912186 ignition[1179]: Stage: kargs
Dec 13 01:31:28.912844 ignition[1179]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:31:28.912861 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:31:28.913002 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:31:28.914558 ignition[1179]: PUT result: OK
Dec 13 01:31:28.923543 ignition[1179]: kargs: kargs passed
Dec 13 01:31:28.923794 ignition[1179]: Ignition finished successfully
Dec 13 01:31:28.926931 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:31:28.932509 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:31:28.960357 ignition[1185]: Ignition 2.19.0
Dec 13 01:31:28.960372 ignition[1185]: Stage: disks
Dec 13 01:31:28.961037 ignition[1185]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:31:28.961053 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:31:28.961222 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:31:28.964278 ignition[1185]: PUT result: OK
Dec 13 01:31:28.970252 ignition[1185]: disks: disks passed
Dec 13 01:31:28.970348 ignition[1185]: Ignition finished successfully
Dec 13 01:31:28.975649 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:31:28.976596 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:31:28.980214 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:31:28.982269 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:31:28.989474 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:31:28.990939 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:31:29.001410 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:31:29.052015 systemd-fsck[1193]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:31:29.056169 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:31:29.064267 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:31:29.233150 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:31:29.234821 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:31:29.237783 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:31:29.264355 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:31:29.274409 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:31:29.277753 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:31:29.280391 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:31:29.280429 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:31:29.293532 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:31:29.301787 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1212)
Dec 13 01:31:29.314253 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:31:29.314330 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:31:29.314355 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:31:29.318538 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:31:29.326805 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:31:29.323933 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:31:29.362288 systemd-networkd[1164]: eth0: Gained IPv6LL
Dec 13 01:31:29.956482 initrd-setup-root[1236]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:31:29.987242 initrd-setup-root[1243]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:31:29.995278 initrd-setup-root[1250]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:31:30.002658 initrd-setup-root[1257]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:31:30.350214 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:31:30.359314 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:31:30.370674 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:31:30.386837 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:31:30.388874 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:31:30.425788 ignition[1324]: INFO : Ignition 2.19.0
Dec 13 01:31:30.425788 ignition[1324]: INFO : Stage: mount
Dec 13 01:31:30.429978 ignition[1324]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:31:30.429978 ignition[1324]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:31:30.433613 ignition[1324]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:31:30.443520 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:31:30.450229 ignition[1324]: INFO : PUT result: OK
Dec 13 01:31:30.458640 ignition[1324]: INFO : mount: mount passed
Dec 13 01:31:30.460338 ignition[1324]: INFO : Ignition finished successfully
Dec 13 01:31:30.465067 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:31:30.482362 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:31:30.525388 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:31:30.568093 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1336)
Dec 13 01:31:30.568455 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:31:30.571307 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:31:30.571381 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:31:30.578335 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:31:30.582100 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:31:30.638705 ignition[1353]: INFO : Ignition 2.19.0
Dec 13 01:31:30.638705 ignition[1353]: INFO : Stage: files
Dec 13 01:31:30.642411 ignition[1353]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:31:30.642411 ignition[1353]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:31:30.642411 ignition[1353]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:31:30.650279 ignition[1353]: INFO : PUT result: OK
Dec 13 01:31:30.655643 ignition[1353]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:31:30.675147 ignition[1353]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:31:30.675147 ignition[1353]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:31:30.700976 ignition[1353]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:31:30.702963 ignition[1353]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:31:30.702963 ignition[1353]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:31:30.701526 unknown[1353]: wrote ssh authorized keys file for user: core
Dec 13 01:31:30.710241 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:31:30.710241 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:31:30.807758 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:31:30.965990 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:31:30.965990 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:31:30.973571 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 01:31:31.439062 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 01:31:31.581149 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 01:31:31.583713 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:31:31.583713 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:31:31.583713 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:31:31.595982 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:31:31.595982 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:31:31.595982 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:31:31.595982 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:31:31.595982 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:31:31.595982 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:31:31.595982 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:31:31.595982 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:31:31.595982 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:31:31.595982 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:31:31.595982 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 01:31:32.006479 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 01:31:32.281759 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:31:32.281759 ignition[1353]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 01:31:32.286476 ignition[1353]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:31:32.286476 ignition[1353]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:31:32.286476 ignition[1353]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 01:31:32.286476 ignition[1353]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:31:32.286476 ignition[1353]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:31:32.286476 ignition[1353]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:31:32.286476 ignition[1353]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:31:32.286476 ignition[1353]: INFO : files: files passed
Dec 13 01:31:32.286476 ignition[1353]: INFO : Ignition finished successfully
Dec 13 01:31:32.310913 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:31:32.316355 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:31:32.324594 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:31:32.329687 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:31:32.331262 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:31:32.348094 initrd-setup-root-after-ignition[1382]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:31:32.348094 initrd-setup-root-after-ignition[1382]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:31:32.353824 initrd-setup-root-after-ignition[1385]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:31:32.357402 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:31:32.360515 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:31:32.368315 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:31:32.412825 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:31:32.412968 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:31:32.418412 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:31:32.422293 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:31:32.425815 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:31:32.432413 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:31:32.463758 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:31:32.475350 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:31:32.490825 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:31:32.491022 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:31:32.491551 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:31:32.491857 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:31:32.492005 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:31:32.492670 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:31:32.493482 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:31:32.493966 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:31:32.494654 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:31:32.495325 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:31:32.495911 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:31:32.496098 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:31:32.496568 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:31:32.497011 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:31:32.497393 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:31:32.497647 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:31:32.497886 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:31:32.499482 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:31:32.500391 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:31:32.500853 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:31:32.518461 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:31:32.521548 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:31:32.521684 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:31:32.534761 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:31:32.535041 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:31:32.539560 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:31:32.539682 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:31:32.558531 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:31:32.569116 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:31:32.590452 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:31:32.592073 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:31:32.597112 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:31:32.598913 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:31:32.615423 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:31:32.615643 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:31:32.626465 ignition[1406]: INFO : Ignition 2.19.0
Dec 13 01:31:32.626465 ignition[1406]: INFO : Stage: umount
Dec 13 01:31:32.629416 ignition[1406]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:31:32.629416 ignition[1406]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:31:32.629416 ignition[1406]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:31:32.633892 ignition[1406]: INFO : PUT result: OK
Dec 13 01:31:32.637478 ignition[1406]: INFO : umount: umount passed
Dec 13 01:31:32.638773 ignition[1406]: INFO : Ignition finished successfully
Dec 13 01:31:32.642863 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:31:32.643009 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:31:32.648556 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:31:32.648691 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:31:32.655422 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:31:32.656843 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:31:32.658400 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:31:32.660257 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:31:32.665335 systemd[1]: Stopped target network.target - Network.
Dec 13 01:31:32.666844 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:31:32.666914 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:31:32.669173 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:31:32.670628 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:31:32.674774 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:31:32.688936 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:31:32.690955 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:31:32.693350 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:31:32.693426 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:31:32.695061 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:31:32.695954 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:31:32.699322 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:31:32.699479 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:31:32.703200 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:31:32.703271 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:31:32.709094 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:31:32.715040 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:31:32.717190 systemd-networkd[1164]: eth0: DHCPv6 lease lost
Dec 13 01:31:32.729109 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:31:32.730765 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:31:32.732037 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:31:32.735442 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:31:32.735655 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:31:32.739584 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:31:32.740847 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:31:32.746866 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:31:32.747044 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:31:32.749538 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:31:32.749615 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:31:32.757328 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:31:32.758478 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:31:32.758565 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:31:32.760471 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:31:32.760542 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:31:32.762036 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:31:32.762097 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:31:32.774723 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:31:32.774823 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:31:32.777008 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:31:32.801917 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:31:32.802335 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:31:32.813871 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:31:32.813974 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:31:32.816797 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:31:32.816864 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:31:32.820699 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:31:32.820773 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:31:32.830229 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:31:32.830356 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:31:32.837441 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:31:32.837509 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:31:32.855365 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:31:32.857044 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:31:32.857203 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:31:32.865463 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 01:31:32.865520 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:31:32.867427 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:31:32.867479 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:31:32.873616 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:31:32.873680 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:31:32.880481 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:31:32.880583 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:31:32.883867 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:31:32.883951 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:31:32.888308 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:31:32.905509 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:31:32.917194 systemd[1]: Switching root.
Dec 13 01:31:32.957195 systemd-journald[178]: Journal stopped
Dec 13 01:31:37.079344 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:31:37.079469 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:31:37.079499 kernel: SELinux: policy capability open_perms=1
Dec 13 01:31:37.079521 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:31:37.079544 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:31:37.079566 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:31:37.079589 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:31:37.079610 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:31:37.079762 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:31:37.079800 kernel: audit: type=1403 audit(1734053495.108:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:31:37.079831 systemd[1]: Successfully loaded SELinux policy in 92.340ms.
Dec 13 01:31:37.079880 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.138ms.
Dec 13 01:31:37.079914 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:31:37.079940 systemd[1]: Detected virtualization amazon.
Dec 13 01:31:37.079966 systemd[1]: Detected architecture x86-64.
Dec 13 01:31:37.079989 systemd[1]: Detected first boot.
Dec 13 01:31:37.080017 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:31:37.080098 zram_generator::config[1448]: No configuration found.
Dec 13 01:31:37.080202 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:31:37.080229 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:31:37.080269 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:31:37.080295 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:31:37.080322 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:31:37.080347 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:31:37.080372 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:31:37.080397 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:31:37.080422 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:31:37.080451 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:31:37.080475 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:31:37.080499 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:31:37.080523 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:31:37.080547 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:31:37.080572 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:31:37.080596 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:31:37.080620 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:31:37.080648 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:31:37.080672 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:31:37.080697 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:31:37.080721 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:31:37.080746 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:31:37.080771 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:31:37.080796 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:31:37.080821 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:31:37.080848 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:31:37.080872 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:31:37.080895 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:31:37.080918 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:31:37.080944 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:31:37.080966 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:31:37.080988 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:31:37.081006 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:31:37.081024 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:31:37.081082 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:31:37.081106 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:31:37.086786 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:31:37.086844 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:31:37.086944 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:31:37.086974 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:31:37.086999 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:31:37.087025 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:31:37.087051 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:31:37.087085 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:31:37.087111 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:31:37.087152 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:31:37.087177 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:31:37.087285 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:31:37.087313 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:31:37.087337 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:31:37.087362 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:31:37.087391 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:31:37.087417 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:31:37.087441 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:31:37.087465 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:31:37.087490 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:31:37.087513 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:31:37.087538 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:31:37.087563 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:31:37.087587 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:31:37.088975 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:31:37.089023 kernel: loop: module loaded
Dec 13 01:31:37.089140 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:31:37.089165 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:31:37.089184 systemd[1]: Stopped verity-setup.service.
Dec 13 01:31:37.089203 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:31:37.089221 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:31:37.089239 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:31:37.089258 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:31:37.089283 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:31:37.089301 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:31:37.101675 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:31:37.101748 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:31:37.101776 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:31:37.101812 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:31:37.101837 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:31:37.102374 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:31:37.102408 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:31:37.102433 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:31:37.102451 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:31:37.102469 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:31:37.102493 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:31:37.102511 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:31:37.102531 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:31:37.102553 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:31:37.102572 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:31:37.102595 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:31:37.102620 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:31:37.102742 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:31:37.102770 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:31:37.102795 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:31:37.102820 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:31:37.102844 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:31:37.102869 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:31:37.102894 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:31:37.102920 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:31:37.102993 systemd-journald[1527]: Collecting audit messages is disabled.
Dec 13 01:31:37.103038 kernel: fuse: init (API version 7.39)
Dec 13 01:31:37.103062 kernel: ACPI: bus type drm_connector registered
Dec 13 01:31:37.103085 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:31:37.103110 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:31:37.103160 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:31:37.103185 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:31:37.103461 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:31:37.103487 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:31:37.103511 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:31:37.103535 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:31:37.103559 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:31:37.103590 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:31:37.103615 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:31:37.103641 systemd-journald[1527]: Journal started
Dec 13 01:31:37.103803 systemd-journald[1527]: Runtime Journal (/run/log/journal/ec214513bd5df29a0f396236c68630b3) is 4.8M, max 38.6M, 33.7M free.
Dec 13 01:31:36.332597 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:31:36.366644 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 13 01:31:36.367199 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:31:37.112402 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:31:37.151721 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:31:37.151822 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:31:37.130756 systemd-tmpfiles[1541]: ACLs are not supported, ignoring.
Dec 13 01:31:37.130785 systemd-tmpfiles[1541]: ACLs are not supported, ignoring.
Dec 13 01:31:37.165276 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:31:37.167684 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:31:37.169945 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:31:37.172918 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:31:37.211981 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:31:37.214068 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:31:37.228244 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:31:37.231796 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:31:37.240203 kernel: loop0: detected capacity change from 0 to 142488
Dec 13 01:31:37.234161 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:31:37.242501 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:31:37.257433 systemd-journald[1527]: Time spent on flushing to /var/log/journal/ec214513bd5df29a0f396236c68630b3 is 54.938ms for 979 entries.
Dec 13 01:31:37.257433 systemd-journald[1527]: System Journal (/var/log/journal/ec214513bd5df29a0f396236c68630b3) is 8.0M, max 195.6M, 187.6M free.
Dec 13 01:31:37.328905 systemd-journald[1527]: Received client request to flush runtime journal.
Dec 13 01:31:37.295402 udevadm[1589]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 01:31:37.332095 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:31:37.426227 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:31:37.434606 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:31:37.438199 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:31:37.468661 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
Dec 13 01:31:37.468695 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
Dec 13 01:31:37.477175 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:31:37.482600 kernel: loop1: detected capacity change from 0 to 140768
Dec 13 01:31:37.638161 kernel: loop2: detected capacity change from 0 to 61336
Dec 13 01:31:37.839179 kernel: loop3: detected capacity change from 0 to 211296
Dec 13 01:31:37.880163 kernel: loop4: detected capacity change from 0 to 142488
Dec 13 01:31:37.914164 kernel: loop5: detected capacity change from 0 to 140768
Dec 13 01:31:37.959825 kernel: loop6: detected capacity change from 0 to 61336
Dec 13 01:31:37.983164 kernel: loop7: detected capacity change from 0 to 211296
Dec 13 01:31:38.005548 (sd-merge)[1604]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Dec 13 01:31:38.008046 (sd-merge)[1604]: Merged extensions into '/usr'.
Dec 13 01:31:38.013766 systemd[1]: Reloading requested from client PID 1557 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:31:38.013908 systemd[1]: Reloading...
Dec 13 01:31:38.192157 zram_generator::config[1630]: No configuration found.
Dec 13 01:31:38.450465 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:31:38.513094 systemd[1]: Reloading finished in 497 ms.
Dec 13 01:31:38.543949 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:31:38.557342 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:31:38.561504 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:31:38.585317 systemd[1]: Reloading requested from client PID 1678 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:31:38.585338 systemd[1]: Reloading...
Dec 13 01:31:38.594481 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:31:38.594993 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:31:38.596501 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:31:38.597218 systemd-tmpfiles[1679]: ACLs are not supported, ignoring.
Dec 13 01:31:38.597308 systemd-tmpfiles[1679]: ACLs are not supported, ignoring.
Dec 13 01:31:38.607424 systemd-tmpfiles[1679]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:31:38.607440 systemd-tmpfiles[1679]: Skipping /boot
Dec 13 01:31:38.632660 systemd-tmpfiles[1679]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:31:38.632677 systemd-tmpfiles[1679]: Skipping /boot
Dec 13 01:31:38.710151 zram_generator::config[1707]: No configuration found.
Dec 13 01:31:38.872085 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:31:38.933058 systemd[1]: Reloading finished in 347 ms.
Dec 13 01:31:38.950251 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:31:38.957993 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:31:38.991880 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:31:39.001836 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:31:39.015433 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:31:39.038862 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:31:39.047219 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:31:39.063716 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:31:39.087004 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:31:39.087325 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:31:39.101194 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:31:39.107648 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:31:39.113752 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:31:39.118623 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:31:39.118834 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:31:39.125818 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:31:39.126209 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:31:39.126599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:31:39.139041 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:31:39.141544 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:31:39.156042 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:31:39.156519 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:31:39.174780 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:31:39.175194 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:31:39.187727 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:31:39.197003 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:31:39.202839 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:31:39.203231 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:31:39.207170 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:31:39.208805 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:31:39.209071 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:31:39.225557 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:31:39.228351 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:31:39.242930 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:31:39.262230 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:31:39.267098 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:31:39.269063 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:31:39.286281 ldconfig[1546]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:31:39.306196 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:31:39.306447 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:31:39.308784 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:31:39.308992 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:31:39.310833 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:31:39.312659 systemd-udevd[1767]: Using default interface naming scheme 'v255'.
Dec 13 01:31:39.327117 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:31:39.329810 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:31:39.338834 augenrules[1798]: No rules
Dec 13 01:31:39.339474 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:31:39.341447 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:31:39.384225 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:31:39.419767 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:31:39.434366 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:31:39.454655 systemd-resolved[1763]: Positive Trust Anchors:
Dec 13 01:31:39.454671 systemd-resolved[1763]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:31:39.454727 systemd-resolved[1763]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:31:39.468798 systemd-resolved[1763]: Defaulting to hostname 'linux'.
Dec 13 01:31:39.480298 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:31:39.482911 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:31:39.487307 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:31:39.493501 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:31:39.603350 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 01:31:39.606038 systemd-networkd[1809]: lo: Link UP
Dec 13 01:31:39.606552 systemd-networkd[1809]: lo: Gained carrier
Dec 13 01:31:39.607318 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1820)
Dec 13 01:31:39.608364 systemd-networkd[1809]: Enumeration completed
Dec 13 01:31:39.608610 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:31:39.610738 systemd[1]: Reached target network.target - Network.
Dec 13 01:31:39.621408 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:31:39.627306 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1820)
Dec 13 01:31:39.632446 (udev-worker)[1808]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:31:39.705177 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 01:31:39.707161 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Dec 13 01:31:39.731161 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Dec 13 01:31:39.734483 systemd-networkd[1809]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:31:39.735081 systemd-networkd[1809]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:31:39.743276 systemd-networkd[1809]: eth0: Link UP
Dec 13 01:31:39.748885 systemd-networkd[1809]: eth0: Gained carrier
Dec 13 01:31:39.748926 systemd-networkd[1809]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:31:39.756337 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:31:39.756414 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Dec 13 01:31:39.760246 systemd-networkd[1809]: eth0: DHCPv4 address 172.31.29.36/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 01:31:39.767226 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 01:31:39.793167 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:31:39.800386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:31:39.814167 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1811)
Dec 13 01:31:40.015965 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 01:31:40.016557 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:31:40.028514 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:31:40.217586 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:31:40.228110 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:31:40.238028 lvm[1925]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:31:40.273996 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:31:40.280508 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:31:40.282621 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:31:40.284609 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:31:40.286682 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:31:40.288768 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:31:40.290703 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:31:40.292773 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:31:40.294428 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:31:40.296399 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:31:40.296441 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:31:40.297743 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:31:40.301873 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:31:40.306773 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:31:40.316567 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:31:40.337449 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:31:40.340478 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:31:40.342547 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:31:40.344340 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:31:40.346037 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:31:40.346074 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:31:40.349155 lvm[1933]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:31:40.355368 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:31:40.364678 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 01:31:40.370345 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:31:40.374263 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:31:40.378325 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:31:40.380430 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:31:40.384300 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:31:40.388386 systemd[1]: Started ntpd.service - Network Time Service.
Dec 13 01:31:40.400437 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:31:40.409325 systemd[1]: Starting setup-oem.service - Setup OEM...
Dec 13 01:31:40.422416 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:31:40.443781 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:31:40.455399 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:31:40.457436 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:31:40.458934 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:31:40.469347 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:31:40.471373 jq[1937]: false
Dec 13 01:31:40.496448 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:31:40.500525 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:31:40.504751 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:31:40.506223 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:31:40.519911 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:31:40.521216 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:31:40.532024 jq[1951]: true
Dec 13 01:31:40.579173 jq[1956]: true
Dec 13 01:31:40.598535 update_engine[1948]: I20241213 01:31:40.591317 1948 main.cc:92] Flatcar Update Engine starting
Dec 13 01:31:40.604671 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:31:40.643909 (ntainerd)[1957]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:31:40.644661 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:31:40.649523 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:31:40.687073 extend-filesystems[1938]: Found loop4
Dec 13 01:31:40.688959 extend-filesystems[1938]: Found loop5
Dec 13 01:31:40.690217 extend-filesystems[1938]: Found loop6
Dec 13 01:31:40.691559 extend-filesystems[1938]: Found loop7
Dec 13 01:31:40.692619 extend-filesystems[1938]: Found nvme0n1
Dec 13 01:31:40.693980 extend-filesystems[1938]: Found nvme0n1p1
Dec 13 01:31:40.695199 extend-filesystems[1938]: Found nvme0n1p2
Dec 13 01:31:40.701889 tar[1953]: linux-amd64/helm
Dec 13 01:31:40.702384 extend-filesystems[1938]: Found nvme0n1p3
Dec 13 01:31:40.705486 extend-filesystems[1938]: Found usr
Dec 13 01:31:40.710668 extend-filesystems[1938]: Found nvme0n1p4
Dec 13 01:31:40.710668 extend-filesystems[1938]: Found nvme0n1p6
Dec 13 01:31:40.710668 extend-filesystems[1938]: Found nvme0n1p7
Dec 13 01:31:40.710668 extend-filesystems[1938]: Found nvme0n1p9
Dec 13 01:31:40.710668 extend-filesystems[1938]: Checking size of /dev/nvme0n1p9
Dec 13 01:31:40.722918 dbus-daemon[1936]: [system] SELinux support is enabled
Dec 13 01:31:40.729827 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:31:40.738633 systemd[1]: Finished setup-oem.service - Setup OEM.
Dec 13 01:31:40.745430 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:31:40.745597 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:31:40.748636 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:31:40.748664 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:31:40.753632 dbus-daemon[1936]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1809 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 01:31:40.769626 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 01:31:40.774557 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:31:40.779180 update_engine[1948]: I20241213 01:31:40.778952 1948 update_check_scheduler.cc:74] Next update check in 7m45s
Dec 13 01:31:40.786372 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 13 01:31:40.799266 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:31:40.809343 ntpd[1940]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting
Dec 13 01:31:40.814411 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting
Dec 13 01:31:40.814411 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 13 01:31:40.814411 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: ----------------------------------------------------
Dec 13 01:31:40.814411 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: ntp-4 is maintained by Network Time Foundation,
Dec 13 01:31:40.814411 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 13 01:31:40.814411 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: corporation. Support and training for ntp-4 are
Dec 13 01:31:40.814411 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: available at https://www.nwtime.org/support
Dec 13 01:31:40.814411 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: ----------------------------------------------------
Dec 13 01:31:40.809377 ntpd[1940]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 13 01:31:40.809388 ntpd[1940]: ----------------------------------------------------
Dec 13 01:31:40.819357 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: proto: precision = 0.070 usec (-24)
Dec 13 01:31:40.809397 ntpd[1940]: ntp-4 is maintained by Network Time Foundation,
Dec 13 01:31:40.809407 ntpd[1940]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 13 01:31:40.809416 ntpd[1940]: corporation. Support and training for ntp-4 are
Dec 13 01:31:40.809426 ntpd[1940]: available at https://www.nwtime.org/support
Dec 13 01:31:40.809435 ntpd[1940]: ----------------------------------------------------
Dec 13 01:31:40.816385 ntpd[1940]: proto: precision = 0.070 usec (-24)
Dec 13 01:31:40.824580 ntpd[1940]: basedate set to 2024-11-30
Dec 13 01:31:40.830236 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: basedate set to 2024-11-30
Dec 13 01:31:40.830236 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: gps base set to 2024-12-01 (week 2343)
Dec 13 01:31:40.824683 ntpd[1940]: gps base set to 2024-12-01 (week 2343)
Dec 13 01:31:40.838790 ntpd[1940]: Listen and drop on 0 v6wildcard [::]:123
Dec 13 01:31:40.847461 extend-filesystems[1938]: Resized partition /dev/nvme0n1p9
Dec 13 01:31:40.840419 ntpd[1940]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 13 01:31:40.849327 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: Listen and drop on 0 v6wildcard [::]:123
Dec 13 01:31:40.849327 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 13 01:31:40.849327 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: Listen normally on 2 lo 127.0.0.1:123
Dec 13 01:31:40.849327 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: Listen normally on 3 eth0 172.31.29.36:123
Dec 13 01:31:40.849327 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: Listen normally on 4 lo [::1]:123
Dec 13 01:31:40.849327 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: bind(21) AF_INET6 fe80::49f:fcff:fe26:3d69%2#123 flags 0x11 failed: Cannot assign requested address
Dec 13 01:31:40.849327 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: unable to create socket on eth0 (5) for fe80::49f:fcff:fe26:3d69%2#123
Dec 13 01:31:40.849327 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: failed to init interface for address fe80::49f:fcff:fe26:3d69%2
Dec 13 01:31:40.849327 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: Listening on routing socket on fd #21 for interface updates
Dec 13 01:31:40.840608 ntpd[1940]: Listen normally on 2 lo 127.0.0.1:123
Dec 13 01:31:40.840641 ntpd[1940]: Listen normally on 3 eth0 172.31.29.36:123
Dec 13 01:31:40.840679 ntpd[1940]: Listen normally on 4 lo [::1]:123
Dec 13 01:31:40.840726 ntpd[1940]: bind(21) AF_INET6 fe80::49f:fcff:fe26:3d69%2#123 flags 0x11 failed: Cannot assign requested address
Dec 13 01:31:40.840746 ntpd[1940]: unable to create socket on eth0 (5) for fe80::49f:fcff:fe26:3d69%2#123
Dec 13 01:31:40.840761 ntpd[1940]: failed to init interface for address fe80::49f:fcff:fe26:3d69%2
Dec 13 01:31:40.840789 ntpd[1940]: Listening on routing socket on fd #21 for interface updates
Dec 13 01:31:40.857412 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:31:40.859758 extend-filesystems[2007]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:31:40.864043 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:31:40.864043 ntpd[1940]: 13 Dec 01:31:40 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:31:40.857453 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:31:40.865003 coreos-metadata[1935]: Dec 13 01:31:40.864 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 01:31:40.871109 systemd-logind[1946]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 13 01:31:40.871356 systemd-logind[1946]: Watching system buttons on /dev/input/event3 (Sleep Button)
Dec 13 01:31:40.871432 systemd-logind[1946]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 01:31:40.881617 bash[2002]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:31:40.881791 coreos-metadata[1935]: Dec 13 01:31:40.876 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Dec 13 01:31:40.881791 coreos-metadata[1935]: Dec 13 01:31:40.880 INFO Fetch successful
Dec 13 01:31:40.881791 coreos-metadata[1935]: Dec 13 01:31:40.880 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Dec 13 01:31:40.876804 systemd-logind[1946]: New seat seat0.
Dec 13 01:31:40.876910 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:31:40.886762 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Dec 13 01:31:40.886847 coreos-metadata[1935]: Dec 13 01:31:40.882 INFO Fetch successful
Dec 13 01:31:40.886847 coreos-metadata[1935]: Dec 13 01:31:40.882 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Dec 13 01:31:40.886847 coreos-metadata[1935]: Dec 13 01:31:40.886 INFO Fetch successful
Dec 13 01:31:40.886847 coreos-metadata[1935]: Dec 13 01:31:40.886 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Dec 13 01:31:40.893715 coreos-metadata[1935]: Dec 13 01:31:40.890 INFO Fetch successful
Dec 13 01:31:40.893715 coreos-metadata[1935]: Dec 13 01:31:40.890 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Dec 13 01:31:40.894415 coreos-metadata[1935]: Dec 13 01:31:40.894 INFO Fetch failed with 404: resource not found
Dec 13 01:31:40.894415 coreos-metadata[1935]: Dec 13 01:31:40.894 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Dec 13 01:31:40.896066 coreos-metadata[1935]: Dec 13 01:31:40.895 INFO Fetch successful
Dec 13 01:31:40.896066 coreos-metadata[1935]: Dec 13 01:31:40.896 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Dec 13 01:31:40.902209 coreos-metadata[1935]: Dec 13 01:31:40.899 INFO Fetch successful
Dec 13 01:31:40.902209 coreos-metadata[1935]: Dec 13 01:31:40.899 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Dec 13 01:31:40.902209 coreos-metadata[1935]: Dec 13 01:31:40.901 INFO Fetch successful
Dec 13 01:31:40.902209 coreos-metadata[1935]: Dec 13 01:31:40.901 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Dec 13 01:31:40.905327 systemd[1]: Starting sshkeys.service...
Dec 13 01:31:40.906606 coreos-metadata[1935]: Dec 13 01:31:40.905 INFO Fetch successful
Dec 13 01:31:40.906606 coreos-metadata[1935]: Dec 13 01:31:40.905 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Dec 13 01:31:40.906773 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 01:31:40.912182 coreos-metadata[1935]: Dec 13 01:31:40.910 INFO Fetch successful
Dec 13 01:31:41.055460 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Dec 13 01:31:41.066448 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 01:31:41.079822 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 01:31:41.105815 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 01:31:41.108357 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:31:41.134928 extend-filesystems[2007]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Dec 13 01:31:41.134928 extend-filesystems[2007]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 01:31:41.134928 extend-filesystems[2007]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Dec 13 01:31:41.147890 extend-filesystems[1938]: Resized filesystem in /dev/nvme0n1p9
Dec 13 01:31:41.143036 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:31:41.143330 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 01:31:41.213219 coreos-metadata[2015]: Dec 13 01:31:41.212 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 01:31:41.216221 coreos-metadata[2015]: Dec 13 01:31:41.213 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Dec 13 01:31:41.219358 coreos-metadata[2015]: Dec 13 01:31:41.217 INFO Fetch successful
Dec 13 01:31:41.219358 coreos-metadata[2015]: Dec 13 01:31:41.217 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 01:31:41.232340 coreos-metadata[2015]: Dec 13 01:31:41.223 INFO Fetch successful
Dec 13 01:31:41.232836 unknown[2015]: wrote ssh authorized keys file for user: core
Dec 13 01:31:41.235317 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 01:31:41.235514 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 13 01:31:41.253171 dbus-daemon[1936]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1999 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 01:31:41.261159 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1823)
Dec 13 01:31:41.265347 systemd-networkd[1809]: eth0: Gained IPv6LL
Dec 13 01:31:41.272626 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:31:41.281422 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:31:41.300347 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Dec 13 01:31:41.316409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:31:41.328763 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:31:41.362460 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 13 01:31:41.479153 update-ssh-keys[2028]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:31:41.479217 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 01:31:41.490305 systemd[1]: Finished sshkeys.service.
Dec 13 01:31:41.540719 locksmithd[2001]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:31:41.593241 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 01:31:41.603809 polkitd[2064]: Started polkitd version 121
Dec 13 01:31:41.643362 amazon-ssm-agent[2039]: Initializing new seelog logger
Dec 13 01:31:41.644529 amazon-ssm-agent[2039]: New Seelog Logger Creation Complete
Dec 13 01:31:41.644784 amazon-ssm-agent[2039]: 2024/12/13 01:31:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:31:41.644844 amazon-ssm-agent[2039]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:31:41.646725 amazon-ssm-agent[2039]: 2024/12/13 01:31:41 processing appconfig overrides
Dec 13 01:31:41.647637 amazon-ssm-agent[2039]: 2024/12/13 01:31:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:31:41.647751 amazon-ssm-agent[2039]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:31:41.649068 amazon-ssm-agent[2039]: 2024/12/13 01:31:41 processing appconfig overrides
Dec 13 01:31:41.649068 amazon-ssm-agent[2039]: 2024/12/13 01:31:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:31:41.649068 amazon-ssm-agent[2039]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:31:41.649068 amazon-ssm-agent[2039]: 2024/12/13 01:31:41 processing appconfig overrides
Dec 13 01:31:41.649068 amazon-ssm-agent[2039]: 2024-12-13 01:31:41 INFO Proxy environment variables:
Dec 13 01:31:41.658607 polkitd[2064]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 01:31:41.658698 polkitd[2064]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 01:31:41.660906 polkitd[2064]: Finished loading, compiling and executing 2 rules
Dec 13 01:31:41.662796 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 01:31:41.663035 systemd[1]: Started polkit.service - Authorization Manager.
Dec 13 01:31:41.663659 polkitd[2064]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 01:31:41.694371 amazon-ssm-agent[2039]: 2024/12/13 01:31:41 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:31:41.694371 amazon-ssm-agent[2039]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 01:31:41.694634 amazon-ssm-agent[2039]: 2024/12/13 01:31:41 processing appconfig overrides
Dec 13 01:31:41.702594 systemd-hostnamed[1999]: Hostname set to (transient)
Dec 13 01:31:41.702596 systemd-resolved[1763]: System hostname changed to 'ip-172-31-29-36'.
Dec 13 01:31:41.757238 amazon-ssm-agent[2039]: 2024-12-13 01:31:41 INFO https_proxy:
Dec 13 01:31:41.854036 amazon-ssm-agent[2039]: 2024-12-13 01:31:41 INFO http_proxy:
Dec 13 01:31:41.944887 sshd_keygen[1978]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:31:41.954192 amazon-ssm-agent[2039]: 2024-12-13 01:31:41 INFO no_proxy:
Dec 13 01:31:42.013715 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 01:31:42.029801 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 01:31:42.043561 systemd[1]: Started sshd@0-172.31.29.36:22-139.178.68.195:38428.service - OpenSSH per-connection server daemon (139.178.68.195:38428).
Dec 13 01:31:42.056483 amazon-ssm-agent[2039]: 2024-12-13 01:31:41 INFO Checking if agent identity type OnPrem can be assumed
Dec 13 01:31:42.089245 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:31:42.089507 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:31:42.102571 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:31:42.121779 containerd[1957]: time="2024-12-13T01:31:42.121600049Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 01:31:42.154433 amazon-ssm-agent[2039]: 2024-12-13 01:31:41 INFO Checking if agent identity type EC2 can be assumed
Dec 13 01:31:42.173558 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:31:42.186816 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:31:42.199555 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 01:31:42.202385 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:31:42.254150 amazon-ssm-agent[2039]: 2024-12-13 01:31:41 INFO Agent will take identity from EC2
Dec 13 01:31:42.318899 containerd[1957]: time="2024-12-13T01:31:42.318818648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:31:42.323907 containerd[1957]: time="2024-12-13T01:31:42.323851520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:31:42.324089 containerd[1957]: time="2024-12-13T01:31:42.324070812Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:31:42.324212 containerd[1957]: time="2024-12-13T01:31:42.324196704Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:31:42.324457 containerd[1957]: time="2024-12-13T01:31:42.324439844Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 01:31:42.324537 containerd[1957]: time="2024-12-13T01:31:42.324524092Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:31:42.324665 containerd[1957]: time="2024-12-13T01:31:42.324640583Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:31:42.324743 containerd[1957]: time="2024-12-13T01:31:42.324729488Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:31:42.325029 containerd[1957]: time="2024-12-13T01:31:42.325007653Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:31:42.325591 containerd[1957]: time="2024-12-13T01:31:42.325087331Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:31:42.325591 containerd[1957]: time="2024-12-13T01:31:42.325114974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:31:42.325591 containerd[1957]: time="2024-12-13T01:31:42.325159303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:31:42.325591 containerd[1957]: time="2024-12-13T01:31:42.325266769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:31:42.325591 containerd[1957]: time="2024-12-13T01:31:42.325549920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:31:42.325975 containerd[1957]: time="2024-12-13T01:31:42.325952192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:31:42.326050 containerd[1957]: time="2024-12-13T01:31:42.326032988Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:31:42.326249 containerd[1957]: time="2024-12-13T01:31:42.326227146Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:31:42.326404 containerd[1957]: time="2024-12-13T01:31:42.326386495Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:31:42.342073 containerd[1957]: time="2024-12-13T01:31:42.338751634Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:31:42.342073 containerd[1957]: time="2024-12-13T01:31:42.338904529Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:31:42.342073 containerd[1957]: time="2024-12-13T01:31:42.339265009Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:31:42.342073 containerd[1957]: time="2024-12-13T01:31:42.339310549Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 01:31:42.342073 containerd[1957]: time="2024-12-13T01:31:42.339388819Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:31:42.342073 containerd[1957]: time="2024-12-13T01:31:42.339598286Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:31:42.342073 containerd[1957]: time="2024-12-13T01:31:42.340676558Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:31:42.342073 containerd[1957]: time="2024-12-13T01:31:42.340859924Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:31:42.342073 containerd[1957]: time="2024-12-13T01:31:42.340887524Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:31:42.342073 containerd[1957]: time="2024-12-13T01:31:42.340916435Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 01:31:42.342073 containerd[1957]: time="2024-12-13T01:31:42.340944149Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:31:42.342073 containerd[1957]: time="2024-12-13T01:31:42.340970683Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:31:42.342073 containerd[1957]: time="2024-12-13T01:31:42.340996857Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:31:42.342073 containerd[1957]: time="2024-12-13T01:31:42.341025858Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:31:42.342679 containerd[1957]: time="2024-12-13T01:31:42.341054727Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:31:42.342679 containerd[1957]: time="2024-12-13T01:31:42.341076591Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:31:42.342679 containerd[1957]: time="2024-12-13T01:31:42.341101959Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:31:42.350758 containerd[1957]: time="2024-12-13T01:31:42.349717233Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:31:42.351392 containerd[1957]: time="2024-12-13T01:31:42.351344404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:31:42.355409 containerd[1957]: time="2024-12-13T01:31:42.351518402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:31:42.355409 containerd[1957]: time="2024-12-13T01:31:42.353566696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:31:42.355409 containerd[1957]: time="2024-12-13T01:31:42.353645374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:31:42.355409 containerd[1957]: time="2024-12-13T01:31:42.353670309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:31:42.355409 containerd[1957]: time="2024-12-13T01:31:42.353699001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:31:42.355409 containerd[1957]: time="2024-12-13T01:31:42.353756145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:31:42.355409 containerd[1957]: time="2024-12-13T01:31:42.353800499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:31:42.355409 containerd[1957]: time="2024-12-13T01:31:42.353828267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 01:31:42.355409 containerd[1957]: time="2024-12-13T01:31:42.355185368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 01:31:42.355409 containerd[1957]: time="2024-12-13T01:31:42.355228313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:31:42.356246 amazon-ssm-agent[2039]: 2024-12-13 01:31:41 INFO [amazon-ssm-agent] using named pipe channel for IPC
Dec 13 01:31:42.359251 sshd[2165]: Accepted publickey for core from 139.178.68.195 port 38428 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:31:42.364170 containerd[1957]: time="2024-12-13T01:31:42.355378726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 01:31:42.364170 containerd[1957]: time="2024-12-13T01:31:42.360527709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:31:42.364170 containerd[1957]: time="2024-12-13T01:31:42.360568946Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 01:31:42.364170 containerd[1957]: time="2024-12-13T01:31:42.360612260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 01:31:42.364170 containerd[1957]: time="2024-12-13T01:31:42.360633468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:31:42.364170 containerd[1957]: time="2024-12-13T01:31:42.360652865Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:31:42.364170 containerd[1957]: time="2024-12-13T01:31:42.360720376Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:31:42.364170 containerd[1957]: time="2024-12-13T01:31:42.360746287Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:31:42.364170 containerd[1957]: time="2024-12-13T01:31:42.360764735Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:31:42.364170 containerd[1957]: time="2024-12-13T01:31:42.360784977Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:31:42.364170 containerd[1957]: time="2024-12-13T01:31:42.360799246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:31:42.364170 containerd[1957]: time="2024-12-13T01:31:42.360818357Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:31:42.364170 containerd[1957]: time="2024-12-13T01:31:42.360834088Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:31:42.364170 containerd[1957]: time="2024-12-13T01:31:42.360849196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:31:42.364914 containerd[1957]: time="2024-12-13T01:31:42.361393516Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 01:31:42.364914 containerd[1957]: time="2024-12-13T01:31:42.361484002Z" level=info msg="Connect containerd service"
Dec 13 01:31:42.364914 containerd[1957]: time="2024-12-13T01:31:42.361541578Z" level=info msg="using legacy CRI server"
Dec 13 01:31:42.364914 containerd[1957]: time="2024-12-13T01:31:42.361552210Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 01:31:42.364914 containerd[1957]: time="2024-12-13T01:31:42.361686612Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 01:31:42.365861 sshd[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:42.372917 containerd[1957]: time="2024-12-13T01:31:42.372809861Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:31:42.374003 containerd[1957]: time="2024-12-13T01:31:42.372978026Z" level=info msg="Start subscribing containerd event"
Dec 13 01:31:42.374003 containerd[1957]: time="2024-12-13T01:31:42.373234084Z" level=info msg="Start recovering state"
Dec 13 01:31:42.374003 containerd[1957]: time="2024-12-13T01:31:42.373346107Z" level=info msg="Start event monitor"
Dec 13 01:31:42.374003 containerd[1957]: time="2024-12-13T01:31:42.373369513Z" level=info msg="Start snapshots syncer"
Dec 13 01:31:42.374003 containerd[1957]: time="2024-12-13T01:31:42.373386981Z" level=info msg="Start cni network conf syncer for default"
Dec 13 01:31:42.374003 containerd[1957]: time="2024-12-13T01:31:42.373398373Z" level=info msg="Start streaming server"
Dec 13 01:31:42.374365 containerd[1957]: time="2024-12-13T01:31:42.374008932Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 01:31:42.374365 containerd[1957]: time="2024-12-13T01:31:42.374084072Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 01:31:42.375323 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 01:31:42.382201 containerd[1957]: time="2024-12-13T01:31:42.375449999Z" level=info msg="containerd successfully booted in 0.263162s"
Dec 13 01:31:42.402237 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 01:31:42.421861 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 01:31:42.440261 systemd-logind[1946]: New session 1 of user core.
Dec 13 01:31:42.453155 amazon-ssm-agent[2039]: 2024-12-13 01:31:41 INFO [amazon-ssm-agent] using named pipe channel for IPC
Dec 13 01:31:42.468731 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:31:42.481730 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:31:42.516411 (systemd)[2180]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:31:42.553319 amazon-ssm-agent[2039]: 2024-12-13 01:31:41 INFO [amazon-ssm-agent] using named pipe channel for IPC
Dec 13 01:31:42.652857 amazon-ssm-agent[2039]: 2024-12-13 01:31:41 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Dec 13 01:31:42.761208 amazon-ssm-agent[2039]: 2024-12-13 01:31:41 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Dec 13 01:31:42.817338 systemd[2180]: Queued start job for default target default.target.
Dec 13 01:31:42.824442 systemd[2180]: Created slice app.slice - User Application Slice.
Dec 13 01:31:42.824481 systemd[2180]: Reached target paths.target - Paths.
Dec 13 01:31:42.824501 systemd[2180]: Reached target timers.target - Timers.
Dec 13 01:31:42.835720 systemd[2180]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:31:42.861855 amazon-ssm-agent[2039]: 2024-12-13 01:31:41 INFO [amazon-ssm-agent] Starting Core Agent
Dec 13 01:31:42.881859 systemd[2180]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:31:42.882017 systemd[2180]: Reached target sockets.target - Sockets.
Dec 13 01:31:42.882041 systemd[2180]: Reached target basic.target - Basic System.
Dec 13 01:31:42.882106 systemd[2180]: Reached target default.target - Main User Target.
Dec 13 01:31:42.882210 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:31:42.882280 systemd[2180]: Startup finished in 352ms.
Dec 13 01:31:42.892363 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:31:42.963195 amazon-ssm-agent[2039]: 2024-12-13 01:31:41 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Dec 13 01:31:43.037782 tar[1953]: linux-amd64/LICENSE
Dec 13 01:31:43.042444 tar[1953]: linux-amd64/README.md
Dec 13 01:31:43.063333 amazon-ssm-agent[2039]: 2024-12-13 01:31:41 INFO [Registrar] Starting registrar module
Dec 13 01:31:43.071802 systemd[1]: Started sshd@1-172.31.29.36:22-139.178.68.195:38444.service - OpenSSH per-connection server daemon (139.178.68.195:38444).
Dec 13 01:31:43.080075 amazon-ssm-agent[2039]: 2024-12-13 01:31:41 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Dec 13 01:31:43.079969 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 01:31:43.085238 amazon-ssm-agent[2039]: 2024-12-13 01:31:43 INFO [EC2Identity] EC2 registration was successful.
Dec 13 01:31:43.085238 amazon-ssm-agent[2039]: 2024-12-13 01:31:43 INFO [CredentialRefresher] credentialRefresher has started
Dec 13 01:31:43.086625 amazon-ssm-agent[2039]: 2024-12-13 01:31:43 INFO [CredentialRefresher] Starting credentials refresher loop
Dec 13 01:31:43.086625 amazon-ssm-agent[2039]: 2024-12-13 01:31:43 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Dec 13 01:31:43.163365 amazon-ssm-agent[2039]: 2024-12-13 01:31:43 INFO [CredentialRefresher] Next credential rotation will be in 32.3415620427 minutes
Dec 13 01:31:43.245973 sshd[2195]: Accepted publickey for core from 139.178.68.195 port 38444 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:31:43.247047 sshd[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:43.253469 systemd-logind[1946]: New session 2 of user core.
Dec 13 01:31:43.261360 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 01:31:43.383560 sshd[2195]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:43.388813 systemd[1]: sshd@1-172.31.29.36:22-139.178.68.195:38444.service: Deactivated successfully.
Dec 13 01:31:43.391801 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 01:31:43.393174 systemd-logind[1946]: Session 2 logged out. Waiting for processes to exit.
Dec 13 01:31:43.395566 systemd-logind[1946]: Removed session 2.
Dec 13 01:31:43.414266 systemd[1]: Started sshd@2-172.31.29.36:22-139.178.68.195:38458.service - OpenSSH per-connection server daemon (139.178.68.195:38458).
Dec 13 01:31:43.488407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:43.491200 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 01:31:43.491716 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:31:43.494829 systemd[1]: Startup finished in 806ms (kernel) + 11.324s (initrd) + 8.476s (userspace) = 20.607s.
Dec 13 01:31:43.604844 sshd[2203]: Accepted publickey for core from 139.178.68.195 port 38458 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:31:43.606339 sshd[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:43.628784 systemd-logind[1946]: New session 3 of user core.
Dec 13 01:31:43.639438 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 01:31:43.788806 sshd[2203]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:43.799910 systemd-logind[1946]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:31:43.800436 systemd[1]: sshd@2-172.31.29.36:22-139.178.68.195:38458.service: Deactivated successfully.
Dec 13 01:31:43.804179 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:31:43.806234 systemd-logind[1946]: Removed session 3.
Dec 13 01:31:43.809832 ntpd[1940]: Listen normally on 6 eth0 [fe80::49f:fcff:fe26:3d69%2]:123
Dec 13 01:31:43.811071 ntpd[1940]: 13 Dec 01:31:43 ntpd[1940]: Listen normally on 6 eth0 [fe80::49f:fcff:fe26:3d69%2]:123
Dec 13 01:31:44.128021 amazon-ssm-agent[2039]: 2024-12-13 01:31:44 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Dec 13 01:31:44.226624 amazon-ssm-agent[2039]: 2024-12-13 01:31:44 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2224) started
Dec 13 01:31:44.330285 amazon-ssm-agent[2039]: 2024-12-13 01:31:44 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Dec 13 01:31:44.814784 kubelet[2210]: E1213 01:31:44.814657 2210 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:31:44.818245 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:31:44.818449 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:31:44.819095 systemd[1]: kubelet.service: Consumed 1.089s CPU time.
Dec 13 01:31:48.430317 systemd-resolved[1763]: Clock change detected. Flushing caches.
Dec 13 01:31:54.432400 systemd[1]: Started sshd@3-172.31.29.36:22-139.178.68.195:55350.service - OpenSSH per-connection server daemon (139.178.68.195:55350).
Dec 13 01:31:54.611564 sshd[2239]: Accepted publickey for core from 139.178.68.195 port 55350 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:31:54.613230 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:54.618582 systemd-logind[1946]: New session 4 of user core.
Dec 13 01:31:54.628561 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 01:31:54.749019 sshd[2239]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:54.759923 systemd[1]: sshd@3-172.31.29.36:22-139.178.68.195:55350.service: Deactivated successfully.
Dec 13 01:31:54.762899 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:31:54.763724 systemd-logind[1946]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:31:54.765236 systemd-logind[1946]: Removed session 4.
Dec 13 01:31:54.806878 systemd[1]: Started sshd@4-172.31.29.36:22-139.178.68.195:55358.service - OpenSSH per-connection server daemon (139.178.68.195:55358).
Dec 13 01:31:54.967013 sshd[2246]: Accepted publickey for core from 139.178.68.195 port 55358 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:31:54.968747 sshd[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:54.975578 systemd-logind[1946]: New session 5 of user core.
Dec 13 01:31:54.978628 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 01:31:55.097926 sshd[2246]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:55.102279 systemd[1]: sshd@4-172.31.29.36:22-139.178.68.195:55358.service: Deactivated successfully.
Dec 13 01:31:55.105473 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:31:55.107517 systemd-logind[1946]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:31:55.109216 systemd-logind[1946]: Removed session 5.
Dec 13 01:31:55.145717 systemd[1]: Started sshd@5-172.31.29.36:22-139.178.68.195:55364.service - OpenSSH per-connection server daemon (139.178.68.195:55364).
Dec 13 01:31:55.304622 sshd[2253]: Accepted publickey for core from 139.178.68.195 port 55364 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:31:55.306513 sshd[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:55.319985 systemd-logind[1946]: New session 6 of user core.
Dec 13 01:31:55.332584 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:31:55.437106 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:31:55.445937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:31:55.460118 sshd[2253]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:55.466970 systemd[1]: sshd@5-172.31.29.36:22-139.178.68.195:55364.service: Deactivated successfully.
Dec 13 01:31:55.473630 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:31:55.477697 systemd-logind[1946]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:31:55.482873 systemd-logind[1946]: Removed session 6.
Dec 13 01:31:55.494692 systemd[1]: Started sshd@6-172.31.29.36:22-139.178.68.195:55366.service - OpenSSH per-connection server daemon (139.178.68.195:55366).
Dec 13 01:31:55.687579 sshd[2263]: Accepted publickey for core from 139.178.68.195 port 55366 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:31:55.691173 sshd[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:55.699386 systemd-logind[1946]: New session 7 of user core.
Dec 13 01:31:55.709671 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 01:31:55.917696 sudo[2266]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 01:31:55.918251 sudo[2266]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:31:55.936554 sudo[2266]: pam_unix(sudo:session): session closed for user root
Dec 13 01:31:55.964424 sshd[2263]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:55.970559 systemd[1]: sshd@6-172.31.29.36:22-139.178.68.195:55366.service: Deactivated successfully.
Dec 13 01:31:55.974649 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 01:31:55.978925 systemd-logind[1946]: Session 7 logged out. Waiting for processes to exit.
Dec 13 01:31:55.983282 systemd-logind[1946]: Removed session 7.
Dec 13 01:31:56.020818 systemd[1]: Started sshd@7-172.31.29.36:22-139.178.68.195:55378.service - OpenSSH per-connection server daemon (139.178.68.195:55378).
Dec 13 01:31:56.200762 sshd[2271]: Accepted publickey for core from 139.178.68.195 port 55378 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:31:56.203578 sshd[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:56.211328 systemd-logind[1946]: New session 8 of user core.
Dec 13 01:31:56.221009 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 01:31:56.330710 sudo[2275]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 01:31:56.331395 sudo[2275]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:31:56.335986 sudo[2275]: pam_unix(sudo:session): session closed for user root
Dec 13 01:31:56.343140 sudo[2274]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 01:31:56.343928 sudo[2274]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:31:56.373229 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:31:56.380233 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 01:31:56.385931 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:31:56.400963 auditctl[2284]: No rules
Dec 13 01:31:56.402696 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 01:31:56.404043 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 01:31:56.413864 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:31:56.470389 augenrules[2308]: No rules
Dec 13 01:31:56.470322 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:31:56.475227 sudo[2274]: pam_unix(sudo:session): session closed for user root
Dec 13 01:31:56.496499 kubelet[2282]: E1213 01:31:56.496062 2282 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:31:56.507726 sshd[2271]: pam_unix(sshd:session): session closed for user core
Dec 13 01:31:56.512201 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:31:56.519202 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:31:56.520841 systemd-logind[1946]: Session 8 logged out. Waiting for processes to exit.
Dec 13 01:31:56.523571 systemd[1]: sshd@7-172.31.29.36:22-139.178.68.195:55378.service: Deactivated successfully.
Dec 13 01:31:56.535051 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 01:31:56.551720 systemd[1]: Started sshd@8-172.31.29.36:22-139.178.68.195:47784.service - OpenSSH per-connection server daemon (139.178.68.195:47784).
Dec 13 01:31:56.553625 systemd-logind[1946]: Removed session 8.
Dec 13 01:31:56.722220 sshd[2317]: Accepted publickey for core from 139.178.68.195 port 47784 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:31:56.724018 sshd[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:31:56.731392 systemd-logind[1946]: New session 9 of user core.
Dec 13 01:31:56.736564 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 01:31:56.841861 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:31:56.842266 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:31:57.571681 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 01:31:57.572407 (dockerd)[2336]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 01:31:58.328085 dockerd[2336]: time="2024-12-13T01:31:58.328023613Z" level=info msg="Starting up"
Dec 13 01:31:58.566471 dockerd[2336]: time="2024-12-13T01:31:58.566419938Z" level=info msg="Loading containers: start."
Dec 13 01:31:58.823425 kernel: Initializing XFRM netlink socket
Dec 13 01:31:58.887879 (udev-worker)[2358]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:31:58.996176 systemd-networkd[1809]: docker0: Link UP
Dec 13 01:31:59.029110 dockerd[2336]: time="2024-12-13T01:31:59.029056617Z" level=info msg="Loading containers: done."
Dec 13 01:31:59.054839 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2281500406-merged.mount: Deactivated successfully.
Dec 13 01:31:59.068923 dockerd[2336]: time="2024-12-13T01:31:59.068798100Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 01:31:59.069201 dockerd[2336]: time="2024-12-13T01:31:59.069009588Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Dec 13 01:31:59.069201 dockerd[2336]: time="2024-12-13T01:31:59.069177035Z" level=info msg="Daemon has completed initialization"
Dec 13 01:31:59.129318 dockerd[2336]: time="2024-12-13T01:31:59.128220800Z" level=info msg="API listen on /run/docker.sock"
Dec 13 01:31:59.128492 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 01:32:00.694524 containerd[1957]: time="2024-12-13T01:32:00.694399834Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 01:32:01.620140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257100703.mount: Deactivated successfully.
Dec 13 01:32:04.955072 containerd[1957]: time="2024-12-13T01:32:04.955012120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:04.957162 containerd[1957]: time="2024-12-13T01:32:04.957090394Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254"
Dec 13 01:32:04.959918 containerd[1957]: time="2024-12-13T01:32:04.959786568Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:04.964507 containerd[1957]: time="2024-12-13T01:32:04.964458174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:04.966612 containerd[1957]: time="2024-12-13T01:32:04.965923355Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 4.271391879s"
Dec 13 01:32:04.966612 containerd[1957]: time="2024-12-13T01:32:04.965981686Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 01:32:04.999748 containerd[1957]: time="2024-12-13T01:32:04.999564638Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 01:32:06.528902 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 01:32:06.538978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:32:07.984081 containerd[1957]: time="2024-12-13T01:32:07.984020914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:08.008093 containerd[1957]: time="2024-12-13T01:32:08.007989433Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732"
Dec 13 01:32:08.024276 containerd[1957]: time="2024-12-13T01:32:08.024109774Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:08.061326 containerd[1957]: time="2024-12-13T01:32:08.060364906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:08.062802 containerd[1957]: time="2024-12-13T01:32:08.062746708Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 3.063128524s"
Dec 13 01:32:08.062924 containerd[1957]: time="2024-12-13T01:32:08.062808064Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 01:32:08.105648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:32:08.115573 containerd[1957]: time="2024-12-13T01:32:08.115533289Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 01:32:08.119969 (kubelet)[2556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:32:08.231435 kubelet[2556]: E1213 01:32:08.231370 2556 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:32:08.235262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:32:08.235649 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:32:09.601141 containerd[1957]: time="2024-12-13T01:32:09.601085193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:09.614021 containerd[1957]: time="2024-12-13T01:32:09.613949383Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822"
Dec 13 01:32:09.621369 containerd[1957]: time="2024-12-13T01:32:09.621256682Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:09.632913 containerd[1957]: time="2024-12-13T01:32:09.632709834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:09.634561 containerd[1957]: time="2024-12-13T01:32:09.634279062Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.518537024s"
Dec 13 01:32:09.634561 containerd[1957]: time="2024-12-13T01:32:09.634345751Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 01:32:09.659608 containerd[1957]: time="2024-12-13T01:32:09.659559732Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 01:32:11.190149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3334126096.mount: Deactivated successfully.
Dec 13 01:32:11.972946 containerd[1957]: time="2024-12-13T01:32:11.972879653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:11.975591 containerd[1957]: time="2024-12-13T01:32:11.975325343Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958"
Dec 13 01:32:11.978197 containerd[1957]: time="2024-12-13T01:32:11.977396206Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:11.981699 containerd[1957]: time="2024-12-13T01:32:11.981630151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:11.982782 containerd[1957]: time="2024-12-13T01:32:11.982573611Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.322971965s"
Dec 13 01:32:11.982782 containerd[1957]: time="2024-12-13T01:32:11.982617063Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 01:32:12.026624 containerd[1957]: time="2024-12-13T01:32:12.026382329Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 01:32:12.349138 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 01:32:12.728235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3445964418.mount: Deactivated successfully.
Dec 13 01:32:14.193608 containerd[1957]: time="2024-12-13T01:32:14.193270579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:14.196498 containerd[1957]: time="2024-12-13T01:32:14.196413374Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Dec 13 01:32:14.199955 containerd[1957]: time="2024-12-13T01:32:14.199875267Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:14.215915 containerd[1957]: time="2024-12-13T01:32:14.215841286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:14.217740 containerd[1957]: time="2024-12-13T01:32:14.217444781Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.191013371s"
Dec 13 01:32:14.217740 containerd[1957]: time="2024-12-13T01:32:14.217498428Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 01:32:14.257650 containerd[1957]: time="2024-12-13T01:32:14.257603386Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 01:32:14.872706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount898604222.mount: Deactivated successfully.
Dec 13 01:32:14.887867 containerd[1957]: time="2024-12-13T01:32:14.887795555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:14.889855 containerd[1957]: time="2024-12-13T01:32:14.889649548Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Dec 13 01:32:14.893326 containerd[1957]: time="2024-12-13T01:32:14.891923498Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:14.896627 containerd[1957]: time="2024-12-13T01:32:14.896575995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:14.897632 containerd[1957]: time="2024-12-13T01:32:14.897594051Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 639.948288ms"
Dec 13 01:32:14.897882 containerd[1957]: time="2024-12-13T01:32:14.897855078Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 01:32:14.927529 containerd[1957]: time="2024-12-13T01:32:14.927483119Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 01:32:15.544956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2153685197.mount: Deactivated successfully.
Dec 13 01:32:18.279131 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 01:32:18.286677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:32:18.824502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:32:18.834362 (kubelet)[2702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:32:18.979615 kubelet[2702]: E1213 01:32:18.979458 2702 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:32:18.985678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:32:18.986043 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:32:19.124782 containerd[1957]: time="2024-12-13T01:32:19.124283339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:19.126860 containerd[1957]: time="2024-12-13T01:32:19.126610697Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Dec 13 01:32:19.130324 containerd[1957]: time="2024-12-13T01:32:19.129114790Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:19.134115 containerd[1957]: time="2024-12-13T01:32:19.134040133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:32:19.135818 containerd[1957]: time="2024-12-13T01:32:19.135466361Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.207942424s"
Dec 13 01:32:19.135818 containerd[1957]: time="2024-12-13T01:32:19.135514716Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 01:32:23.237833 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:32:23.244762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:32:23.280713 systemd[1]: Reloading requested from client PID 2776 ('systemctl') (unit session-9.scope)...
Dec 13 01:32:23.280734 systemd[1]: Reloading...
Dec 13 01:32:23.428378 zram_generator::config[2817]: No configuration found.
Dec 13 01:32:23.592143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:32:23.698695 systemd[1]: Reloading finished in 417 ms.
Dec 13 01:32:23.771721 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 01:32:23.771869 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 01:32:23.772195 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:32:23.778909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:32:24.514528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:32:24.520156 (kubelet)[2874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:32:24.594110 kubelet[2874]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:32:24.594638 kubelet[2874]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:32:24.594638 kubelet[2874]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:32:24.604802 kubelet[2874]: I1213 01:32:24.604704 2874 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:32:25.310774 kubelet[2874]: I1213 01:32:25.310691 2874 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:32:25.310774 kubelet[2874]: I1213 01:32:25.310774 2874 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:32:25.311566 kubelet[2874]: I1213 01:32:25.311501 2874 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:32:25.364226 kubelet[2874]: E1213 01:32:25.363221 2874 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.36:6443: connect: connection refused
Dec 13 01:32:25.364226 kubelet[2874]: I1213 01:32:25.363339 2874 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:32:25.397474 kubelet[2874]: I1213 01:32:25.397425 2874 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:32:25.398683 kubelet[2874]: I1213 01:32:25.398648 2874 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:32:25.400428 kubelet[2874]: I1213 01:32:25.400389 2874 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:32:25.400608 kubelet[2874]: I1213 01:32:25.400434 2874 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:32:25.400608 kubelet[2874]: I1213 01:32:25.400450 2874 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:32:25.400608 kubelet[2874]: I1213 01:32:25.400593 2874 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:32:25.400739 kubelet[2874]: I1213 01:32:25.400722 2874 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:32:25.400780 kubelet[2874]: I1213 01:32:25.400742 2874 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:32:25.400780 kubelet[2874]: I1213 01:32:25.400776 2874 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:32:25.400853 kubelet[2874]: I1213 01:32:25.400791 2874 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:32:25.405624 kubelet[2874]: W1213 01:32:25.404791 2874 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.29.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.36:6443: connect: connection refused
Dec 13 01:32:25.405624 kubelet[2874]: E1213 01:32:25.404865 2874 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.36:6443: connect: connection refused
Dec 13 01:32:25.408074 kubelet[2874]: I1213 01:32:25.408039 2874 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:32:25.417326 kubelet[2874]: W1213 01:32:25.416571 2874 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.29.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-36&limit=500&resourceVersion=0": dial tcp 172.31.29.36:6443: connect: connection refused
Dec 13 01:32:25.417326 kubelet[2874]: E1213 01:32:25.416653 2874 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-36&limit=500&resourceVersion=0": dial tcp 172.31.29.36:6443: connect: connection refused
Dec 13 01:32:25.417589 kubelet[2874]: I1213 01:32:25.417568 2874 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:32:25.418162 kubelet[2874]: W1213 01:32:25.417709 2874 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:32:25.418911 kubelet[2874]: I1213 01:32:25.418887 2874 server.go:1256] "Started kubelet"
Dec 13 01:32:25.419220 kubelet[2874]: I1213 01:32:25.419202 2874 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:32:25.420381 kubelet[2874]: I1213 01:32:25.420360 2874 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:32:25.420815 kubelet[2874]: I1213 01:32:25.420797 2874 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:32:25.421744 kubelet[2874]: I1213 01:32:25.421715 2874 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:32:25.422277 kubelet[2874]: I1213 01:32:25.422002 2874 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:32:25.429621 kubelet[2874]: I1213 01:32:25.428986 2874 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:32:25.433641 kubelet[2874]: E1213 01:32:25.433499 2874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-36?timeout=10s\": dial tcp 172.31.29.36:6443: connect: connection refused" interval="200ms"
Dec 13 01:32:25.436610 kubelet[2874]: E1213 01:32:25.435316 2874 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.36:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-36.181098831c32a528 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-36,UID:ip-172-31-29-36,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-36,},FirstTimestamp:2024-12-13 01:32:25.418859816 +0000 UTC m=+0.888322208,LastTimestamp:2024-12-13 01:32:25.418859816 +0000 UTC m=+0.888322208,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-36,}"
Dec 13 01:32:25.441460 kubelet[2874]: I1213 01:32:25.440339 2874 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:32:25.441460 kubelet[2874]: I1213 01:32:25.440459 2874 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:32:25.441460 kubelet[2874]: I1213 01:32:25.441366 2874 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:32:25.441689 kubelet[2874]: I1213 01:32:25.441558 2874 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:32:25.444662 kubelet[2874]: I1213 01:32:25.444639 2874 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:32:25.461353 kubelet[2874]: W1213 01:32:25.458987 2874 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.29.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.36:6443: connect: connection refused
Dec 13 01:32:25.461353 kubelet[2874]: E1213 01:32:25.459077 2874 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.36:6443: connect: connection refused
Dec 13 01:32:25.470597 kubelet[2874]: I1213 01:32:25.470559 2874 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:32:25.477236 kubelet[2874]: I1213 01:32:25.477198 2874 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:32:25.477236 kubelet[2874]: I1213 01:32:25.477243 2874 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:32:25.477451 kubelet[2874]: I1213 01:32:25.477261 2874 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:32:25.477451 kubelet[2874]: E1213 01:32:25.477347 2874 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:32:25.479673 kubelet[2874]: W1213 01:32:25.479635 2874 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.29.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.36:6443: connect: connection refused
Dec 13 01:32:25.479809 kubelet[2874]: E1213 01:32:25.479733 2874 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.36:6443: connect: connection refused
Dec 13 01:32:25.485440 kubelet[2874]: I1213 01:32:25.485416 2874 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:32:25.485624 kubelet[2874]: I1213 01:32:25.485614 2874 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:32:25.485702 kubelet[2874]: I1213 01:32:25.485695 2874 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:32:25.489569 kubelet[2874]: I1213 01:32:25.489544 2874 policy_none.go:49] "None policy: Start"
Dec 13 01:32:25.490791 kubelet[2874]: I1213 01:32:25.490772 2874 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:32:25.490925 kubelet[2874]: I1213 01:32:25.490915 2874 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:32:25.502796 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 01:32:25.524926 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 01:32:25.530690 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 01:32:25.532829 kubelet[2874]: I1213 01:32:25.532801 2874 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-36"
Dec 13 01:32:25.533281 kubelet[2874]: E1213 01:32:25.533258 2874 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.36:6443/api/v1/nodes\": dial tcp 172.31.29.36:6443: connect: connection refused" node="ip-172-31-29-36"
Dec 13 01:32:25.547014 kubelet[2874]: I1213 01:32:25.546968 2874 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:32:25.547546 kubelet[2874]: I1213 01:32:25.547512 2874 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:32:25.553889 kubelet[2874]: E1213 01:32:25.553848 2874 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-36\" not found"
Dec 13 01:32:25.578064 kubelet[2874]: I1213 01:32:25.577886 2874 topology_manager.go:215] "Topology Admit Handler" podUID="c8d5c0784e80780a289d2da3a92e981f" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-36"
Dec 13 01:32:25.584997 kubelet[2874]: I1213 01:32:25.584953 2874 topology_manager.go:215] "Topology Admit Handler" podUID="1d4acbfaef23ef94975c05e4994182cf" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-36"
Dec 13 01:32:25.587427 kubelet[2874]: I1213 01:32:25.587166 2874 topology_manager.go:215] "Topology Admit Handler" podUID="dd3e0a8b569ebeb8c6321ca8c59c83e5" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-36"
Dec 13 01:32:25.599886 systemd[1]: Created slice kubepods-burstable-podc8d5c0784e80780a289d2da3a92e981f.slice - libcontainer container kubepods-burstable-podc8d5c0784e80780a289d2da3a92e981f.slice.
Dec 13 01:32:25.616927 systemd[1]: Created slice kubepods-burstable-pod1d4acbfaef23ef94975c05e4994182cf.slice - libcontainer container kubepods-burstable-pod1d4acbfaef23ef94975c05e4994182cf.slice.
Dec 13 01:32:25.623306 systemd[1]: Created slice kubepods-burstable-poddd3e0a8b569ebeb8c6321ca8c59c83e5.slice - libcontainer container kubepods-burstable-poddd3e0a8b569ebeb8c6321ca8c59c83e5.slice.
Dec 13 01:32:25.637364 kubelet[2874]: E1213 01:32:25.637327 2874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-36?timeout=10s\": dial tcp 172.31.29.36:6443: connect: connection refused" interval="400ms" Dec 13 01:32:25.735855 kubelet[2874]: I1213 01:32:25.735817 2874 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-36" Dec 13 01:32:25.736373 kubelet[2874]: E1213 01:32:25.736348 2874 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.36:6443/api/v1/nodes\": dial tcp 172.31.29.36:6443: connect: connection refused" node="ip-172-31-29-36" Dec 13 01:32:25.742749 kubelet[2874]: I1213 01:32:25.742636 2874 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d4acbfaef23ef94975c05e4994182cf-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-36\" (UID: \"1d4acbfaef23ef94975c05e4994182cf\") " pod="kube-system/kube-controller-manager-ip-172-31-29-36" Dec 13 01:32:25.742749 kubelet[2874]: I1213 01:32:25.742699 2874 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8d5c0784e80780a289d2da3a92e981f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-36\" (UID: \"c8d5c0784e80780a289d2da3a92e981f\") " pod="kube-system/kube-apiserver-ip-172-31-29-36" Dec 13 01:32:25.742749 kubelet[2874]: I1213 01:32:25.742729 2874 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1d4acbfaef23ef94975c05e4994182cf-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-36\" (UID: \"1d4acbfaef23ef94975c05e4994182cf\") " 
pod="kube-system/kube-controller-manager-ip-172-31-29-36" Dec 13 01:32:25.743128 kubelet[2874]: I1213 01:32:25.742766 2874 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d4acbfaef23ef94975c05e4994182cf-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-36\" (UID: \"1d4acbfaef23ef94975c05e4994182cf\") " pod="kube-system/kube-controller-manager-ip-172-31-29-36" Dec 13 01:32:25.743128 kubelet[2874]: I1213 01:32:25.742796 2874 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3e0a8b569ebeb8c6321ca8c59c83e5-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-36\" (UID: \"dd3e0a8b569ebeb8c6321ca8c59c83e5\") " pod="kube-system/kube-scheduler-ip-172-31-29-36" Dec 13 01:32:25.743128 kubelet[2874]: I1213 01:32:25.742824 2874 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8d5c0784e80780a289d2da3a92e981f-ca-certs\") pod \"kube-apiserver-ip-172-31-29-36\" (UID: \"c8d5c0784e80780a289d2da3a92e981f\") " pod="kube-system/kube-apiserver-ip-172-31-29-36" Dec 13 01:32:25.743128 kubelet[2874]: I1213 01:32:25.742850 2874 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8d5c0784e80780a289d2da3a92e981f-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-36\" (UID: \"c8d5c0784e80780a289d2da3a92e981f\") " pod="kube-system/kube-apiserver-ip-172-31-29-36" Dec 13 01:32:25.743128 kubelet[2874]: I1213 01:32:25.742892 2874 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d4acbfaef23ef94975c05e4994182cf-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-36\" (UID: \"1d4acbfaef23ef94975c05e4994182cf\") " 
pod="kube-system/kube-controller-manager-ip-172-31-29-36" Dec 13 01:32:25.743256 kubelet[2874]: I1213 01:32:25.742936 2874 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d4acbfaef23ef94975c05e4994182cf-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-36\" (UID: \"1d4acbfaef23ef94975c05e4994182cf\") " pod="kube-system/kube-controller-manager-ip-172-31-29-36" Dec 13 01:32:25.914964 containerd[1957]: time="2024-12-13T01:32:25.914764594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-36,Uid:c8d5c0784e80780a289d2da3a92e981f,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:25.926934 containerd[1957]: time="2024-12-13T01:32:25.926636857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-36,Uid:1d4acbfaef23ef94975c05e4994182cf,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:25.927719 containerd[1957]: time="2024-12-13T01:32:25.927677310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-36,Uid:dd3e0a8b569ebeb8c6321ca8c59c83e5,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:26.038491 kubelet[2874]: E1213 01:32:26.038359 2874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-36?timeout=10s\": dial tcp 172.31.29.36:6443: connect: connection refused" interval="800ms" Dec 13 01:32:26.138613 kubelet[2874]: I1213 01:32:26.138573 2874 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-36" Dec 13 01:32:26.139045 kubelet[2874]: E1213 01:32:26.139007 2874 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.36:6443/api/v1/nodes\": dial tcp 172.31.29.36:6443: connect: connection refused" node="ip-172-31-29-36" Dec 13 01:32:26.287188 kubelet[2874]: W1213 
01:32:26.287042 2874 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.29.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.36:6443: connect: connection refused Dec 13 01:32:26.287188 kubelet[2874]: E1213 01:32:26.287196 2874 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.36:6443: connect: connection refused Dec 13 01:32:26.394442 kubelet[2874]: W1213 01:32:26.394401 2874 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.29.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-36&limit=500&resourceVersion=0": dial tcp 172.31.29.36:6443: connect: connection refused Dec 13 01:32:26.394442 kubelet[2874]: E1213 01:32:26.394445 2874 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-36&limit=500&resourceVersion=0": dial tcp 172.31.29.36:6443: connect: connection refused Dec 13 01:32:26.500931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3046675270.mount: Deactivated successfully. Dec 13 01:32:26.507752 update_engine[1948]: I20241213 01:32:26.507679 1948 update_attempter.cc:509] Updating boot flags... 
Dec 13 01:32:26.517573 containerd[1957]: time="2024-12-13T01:32:26.517523992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:26.535196 containerd[1957]: time="2024-12-13T01:32:26.534696968Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:32:26.536935 containerd[1957]: time="2024-12-13T01:32:26.536853190Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:26.540834 containerd[1957]: time="2024-12-13T01:32:26.540427065Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:26.543768 containerd[1957]: time="2024-12-13T01:32:26.543730300Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:26.547202 containerd[1957]: time="2024-12-13T01:32:26.547148198Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:32:26.549517 containerd[1957]: time="2024-12-13T01:32:26.549467465Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:32:26.555786 containerd[1957]: time="2024-12-13T01:32:26.555364868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:32:26.560391 
containerd[1957]: time="2024-12-13T01:32:26.560264291Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 645.326857ms" Dec 13 01:32:26.595866 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (2923) Dec 13 01:32:26.600765 containerd[1957]: time="2024-12-13T01:32:26.600715655Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 673.98875ms" Dec 13 01:32:26.604357 containerd[1957]: time="2024-12-13T01:32:26.604274415Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 676.507034ms" Dec 13 01:32:26.738103 kubelet[2874]: W1213 01:32:26.737904 2874 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.29.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.36:6443: connect: connection refused Dec 13 01:32:26.739090 kubelet[2874]: E1213 01:32:26.738175 2874 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.36:6443: connect: connection refused Dec 13 01:32:26.849164 
kubelet[2874]: E1213 01:32:26.849048 2874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-36?timeout=10s\": dial tcp 172.31.29.36:6443: connect: connection refused" interval="1.6s" Dec 13 01:32:26.942455 kubelet[2874]: I1213 01:32:26.942122 2874 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-36" Dec 13 01:32:26.943196 kubelet[2874]: E1213 01:32:26.943096 2874 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.36:6443/api/v1/nodes\": dial tcp 172.31.29.36:6443: connect: connection refused" node="ip-172-31-29-36" Dec 13 01:32:26.986259 kubelet[2874]: W1213 01:32:26.986186 2874 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.29.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.36:6443: connect: connection refused Dec 13 01:32:26.988444 kubelet[2874]: E1213 01:32:26.988414 2874 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.36:6443: connect: connection refused Dec 13 01:32:27.059110 containerd[1957]: time="2024-12-13T01:32:27.058977613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:27.070057 containerd[1957]: time="2024-12-13T01:32:27.069494741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:27.070057 containerd[1957]: time="2024-12-13T01:32:27.069548121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:27.070057 containerd[1957]: time="2024-12-13T01:32:27.069708860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:27.070403 containerd[1957]: time="2024-12-13T01:32:27.069466618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:27.070403 containerd[1957]: time="2024-12-13T01:32:27.069546040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:27.070403 containerd[1957]: time="2024-12-13T01:32:27.069570370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:27.070403 containerd[1957]: time="2024-12-13T01:32:27.069670504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:27.084894 containerd[1957]: time="2024-12-13T01:32:27.084757902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:27.086448 containerd[1957]: time="2024-12-13T01:32:27.086372904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:27.087035 containerd[1957]: time="2024-12-13T01:32:27.086898368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:27.087631 containerd[1957]: time="2024-12-13T01:32:27.087395174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:27.113559 systemd[1]: Started cri-containerd-e6748d13b23b0326004ab7d2e7346a4adb87eddd6b888c02ef92da0fcf0458af.scope - libcontainer container e6748d13b23b0326004ab7d2e7346a4adb87eddd6b888c02ef92da0fcf0458af. Dec 13 01:32:27.121849 systemd[1]: Started cri-containerd-949815fb0728adb07ee1e9b38e7da468d867717121cd051e8fcabcd7e8fb1d52.scope - libcontainer container 949815fb0728adb07ee1e9b38e7da468d867717121cd051e8fcabcd7e8fb1d52. Dec 13 01:32:27.163562 systemd[1]: Started cri-containerd-fb592d3cdb5b66650e959f51c1ad281b0ec77d93221218c90b4bb9d33ec2a833.scope - libcontainer container fb592d3cdb5b66650e959f51c1ad281b0ec77d93221218c90b4bb9d33ec2a833. Dec 13 01:32:27.239419 containerd[1957]: time="2024-12-13T01:32:27.239373411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-36,Uid:c8d5c0784e80780a289d2da3a92e981f,Namespace:kube-system,Attempt:0,} returns sandbox id \"949815fb0728adb07ee1e9b38e7da468d867717121cd051e8fcabcd7e8fb1d52\"" Dec 13 01:32:27.257233 containerd[1957]: time="2024-12-13T01:32:27.256999518Z" level=info msg="CreateContainer within sandbox \"949815fb0728adb07ee1e9b38e7da468d867717121cd051e8fcabcd7e8fb1d52\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:32:27.264670 containerd[1957]: time="2024-12-13T01:32:27.264528535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-36,Uid:dd3e0a8b569ebeb8c6321ca8c59c83e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6748d13b23b0326004ab7d2e7346a4adb87eddd6b888c02ef92da0fcf0458af\"" Dec 13 01:32:27.273823 containerd[1957]: time="2024-12-13T01:32:27.273625067Z" level=info msg="CreateContainer within sandbox \"e6748d13b23b0326004ab7d2e7346a4adb87eddd6b888c02ef92da0fcf0458af\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:32:27.282452 containerd[1957]: time="2024-12-13T01:32:27.282411897Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-36,Uid:1d4acbfaef23ef94975c05e4994182cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb592d3cdb5b66650e959f51c1ad281b0ec77d93221218c90b4bb9d33ec2a833\"" Dec 13 01:32:27.294669 containerd[1957]: time="2024-12-13T01:32:27.294577043Z" level=info msg="CreateContainer within sandbox \"fb592d3cdb5b66650e959f51c1ad281b0ec77d93221218c90b4bb9d33ec2a833\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:32:27.325507 containerd[1957]: time="2024-12-13T01:32:27.325458162Z" level=info msg="CreateContainer within sandbox \"949815fb0728adb07ee1e9b38e7da468d867717121cd051e8fcabcd7e8fb1d52\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3e2e52eb4790ae975da433069d80e92852e55a93d650463ffedd942809df20d4\"" Dec 13 01:32:27.326242 containerd[1957]: time="2024-12-13T01:32:27.326203006Z" level=info msg="StartContainer for \"3e2e52eb4790ae975da433069d80e92852e55a93d650463ffedd942809df20d4\"" Dec 13 01:32:27.349559 containerd[1957]: time="2024-12-13T01:32:27.349313198Z" level=info msg="CreateContainer within sandbox \"e6748d13b23b0326004ab7d2e7346a4adb87eddd6b888c02ef92da0fcf0458af\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"15268a03bf6f0e690a28fca67c951c8b35e3587d4f14f1df61d120b2c38fa961\"" Dec 13 01:32:27.350181 containerd[1957]: time="2024-12-13T01:32:27.350065576Z" level=info msg="StartContainer for \"15268a03bf6f0e690a28fca67c951c8b35e3587d4f14f1df61d120b2c38fa961\"" Dec 13 01:32:27.355600 containerd[1957]: time="2024-12-13T01:32:27.355556068Z" level=info msg="CreateContainer within sandbox \"fb592d3cdb5b66650e959f51c1ad281b0ec77d93221218c90b4bb9d33ec2a833\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ef85f3543b698674432a81a7dc2cd8f1cd97cfa30b1942753ec4c02a7c4e0075\"" Dec 13 01:32:27.356916 containerd[1957]: time="2024-12-13T01:32:27.356871975Z" 
level=info msg="StartContainer for \"ef85f3543b698674432a81a7dc2cd8f1cd97cfa30b1942753ec4c02a7c4e0075\"" Dec 13 01:32:27.382968 systemd[1]: Started cri-containerd-3e2e52eb4790ae975da433069d80e92852e55a93d650463ffedd942809df20d4.scope - libcontainer container 3e2e52eb4790ae975da433069d80e92852e55a93d650463ffedd942809df20d4. Dec 13 01:32:27.390694 kubelet[2874]: E1213 01:32:27.390441 2874 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.36:6443: connect: connection refused Dec 13 01:32:27.422949 systemd[1]: Started cri-containerd-15268a03bf6f0e690a28fca67c951c8b35e3587d4f14f1df61d120b2c38fa961.scope - libcontainer container 15268a03bf6f0e690a28fca67c951c8b35e3587d4f14f1df61d120b2c38fa961. Dec 13 01:32:27.433544 systemd[1]: Started cri-containerd-ef85f3543b698674432a81a7dc2cd8f1cd97cfa30b1942753ec4c02a7c4e0075.scope - libcontainer container ef85f3543b698674432a81a7dc2cd8f1cd97cfa30b1942753ec4c02a7c4e0075. 
Dec 13 01:32:27.495803 containerd[1957]: time="2024-12-13T01:32:27.495756647Z" level=info msg="StartContainer for \"3e2e52eb4790ae975da433069d80e92852e55a93d650463ffedd942809df20d4\" returns successfully" Dec 13 01:32:27.543318 containerd[1957]: time="2024-12-13T01:32:27.542625116Z" level=info msg="StartContainer for \"15268a03bf6f0e690a28fca67c951c8b35e3587d4f14f1df61d120b2c38fa961\" returns successfully" Dec 13 01:32:27.561413 containerd[1957]: time="2024-12-13T01:32:27.561271515Z" level=info msg="StartContainer for \"ef85f3543b698674432a81a7dc2cd8f1cd97cfa30b1942753ec4c02a7c4e0075\" returns successfully" Dec 13 01:32:28.546713 kubelet[2874]: I1213 01:32:28.546411 2874 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-36" Dec 13 01:32:30.749761 kubelet[2874]: E1213 01:32:30.749712 2874 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-36\" not found" node="ip-172-31-29-36" Dec 13 01:32:30.780177 kubelet[2874]: I1213 01:32:30.780099 2874 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-36" Dec 13 01:32:31.406698 kubelet[2874]: I1213 01:32:31.406354 2874 apiserver.go:52] "Watching apiserver" Dec 13 01:32:31.442393 kubelet[2874]: I1213 01:32:31.442341 2874 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:32:31.556141 kubelet[2874]: E1213 01:32:31.556094 2874 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-36\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-29-36" Dec 13 01:32:34.179650 systemd[1]: Reloading requested from client PID 3247 ('systemctl') (unit session-9.scope)... Dec 13 01:32:34.179670 systemd[1]: Reloading... Dec 13 01:32:34.319329 zram_generator::config[3290]: No configuration found. 
Dec 13 01:32:34.501169 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:32:34.616387 systemd[1]: Reloading finished in 436 ms. Dec 13 01:32:34.668735 kubelet[2874]: I1213 01:32:34.668646 2874 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:32:34.668890 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:34.685803 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:32:34.686043 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:34.686096 systemd[1]: kubelet.service: Consumed 1.146s CPU time, 108.9M memory peak, 0B memory swap peak. Dec 13 01:32:34.693312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:32:35.551356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:32:35.576848 (kubelet)[3344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:32:35.716989 kubelet[3344]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:32:35.716989 kubelet[3344]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:32:35.716989 kubelet[3344]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:32:35.719726 kubelet[3344]: I1213 01:32:35.717860 3344 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:32:35.742748 kubelet[3344]: I1213 01:32:35.740582 3344 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:32:35.742748 kubelet[3344]: I1213 01:32:35.740616 3344 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:32:35.742748 kubelet[3344]: I1213 01:32:35.741345 3344 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:32:35.749007 kubelet[3344]: I1213 01:32:35.748518 3344 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:32:35.753758 kubelet[3344]: I1213 01:32:35.753155 3344 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:32:35.754652 sudo[3357]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:32:35.755132 sudo[3357]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 01:32:35.766920 kubelet[3344]: I1213 01:32:35.766608 3344 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:32:35.767217 kubelet[3344]: I1213 01:32:35.767191 3344 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:32:35.767578 kubelet[3344]: I1213 01:32:35.767549 3344 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:32:35.767728 kubelet[3344]: I1213 01:32:35.767584 3344 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:32:35.767728 kubelet[3344]: I1213 01:32:35.767600 3344 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:32:35.767728 kubelet[3344]: I1213 
01:32:35.767639 3344 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:35.767859 kubelet[3344]: I1213 01:32:35.767768 3344 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:32:35.767859 kubelet[3344]: I1213 01:32:35.767787 3344 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:32:35.767973 kubelet[3344]: I1213 01:32:35.767946 3344 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:32:35.768063 kubelet[3344]: I1213 01:32:35.768040 3344 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:32:35.770654 kubelet[3344]: I1213 01:32:35.769974 3344 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:32:35.770654 kubelet[3344]: I1213 01:32:35.770254 3344 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:32:35.772517 kubelet[3344]: I1213 01:32:35.771842 3344 server.go:1256] "Started kubelet" Dec 13 01:32:35.779968 kubelet[3344]: I1213 01:32:35.779907 3344 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:32:35.785713 kubelet[3344]: I1213 01:32:35.784936 3344 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:32:35.804139 kubelet[3344]: I1213 01:32:35.803841 3344 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:32:35.809480 kubelet[3344]: I1213 01:32:35.807781 3344 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:32:35.809480 kubelet[3344]: I1213 01:32:35.808039 3344 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:32:35.823204 kubelet[3344]: I1213 01:32:35.821931 3344 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:32:35.823204 kubelet[3344]: I1213 01:32:35.822505 3344 desired_state_of_world_populator.go:151] "Desired 
state populator starts to run" Dec 13 01:32:35.823204 kubelet[3344]: I1213 01:32:35.822878 3344 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:32:35.850907 kubelet[3344]: I1213 01:32:35.850879 3344 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:32:35.855484 kubelet[3344]: I1213 01:32:35.855456 3344 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:32:35.855844 kubelet[3344]: I1213 01:32:35.855745 3344 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:32:35.856054 kubelet[3344]: I1213 01:32:35.856039 3344 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:32:35.856709 kubelet[3344]: E1213 01:32:35.856691 3344 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:32:35.872624 kubelet[3344]: I1213 01:32:35.872596 3344 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:32:35.873346 kubelet[3344]: I1213 01:32:35.872764 3344 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:32:35.873346 kubelet[3344]: I1213 01:32:35.872875 3344 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:32:35.876903 kubelet[3344]: E1213 01:32:35.876876 3344 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:32:35.957167 kubelet[3344]: E1213 01:32:35.956960 3344 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:32:35.960929 kubelet[3344]: I1213 01:32:35.960873 3344 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-36" Dec 13 01:32:35.992884 kubelet[3344]: I1213 01:32:35.992855 3344 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-29-36" Dec 13 01:32:35.993546 kubelet[3344]: I1213 01:32:35.993399 3344 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-36" Dec 13 01:32:36.074462 kubelet[3344]: I1213 01:32:36.073819 3344 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:32:36.074462 kubelet[3344]: I1213 01:32:36.073947 3344 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:32:36.074462 kubelet[3344]: I1213 01:32:36.073972 3344 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:32:36.077838 kubelet[3344]: I1213 01:32:36.077429 3344 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:32:36.077838 kubelet[3344]: I1213 01:32:36.077564 3344 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:32:36.077838 kubelet[3344]: I1213 01:32:36.077580 3344 policy_none.go:49] "None policy: Start" Dec 13 01:32:36.083910 kubelet[3344]: I1213 01:32:36.083164 3344 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:32:36.083910 kubelet[3344]: I1213 01:32:36.083208 3344 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:32:36.083910 kubelet[3344]: I1213 01:32:36.083567 3344 state_mem.go:75] "Updated machine memory state" Dec 13 01:32:36.102349 kubelet[3344]: I1213 01:32:36.101904 3344 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:32:36.111788 
kubelet[3344]: I1213 01:32:36.111501 3344 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:32:36.160113 kubelet[3344]: I1213 01:32:36.159843 3344 topology_manager.go:215] "Topology Admit Handler" podUID="c8d5c0784e80780a289d2da3a92e981f" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-36" Dec 13 01:32:36.161064 kubelet[3344]: I1213 01:32:36.160870 3344 topology_manager.go:215] "Topology Admit Handler" podUID="1d4acbfaef23ef94975c05e4994182cf" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-36" Dec 13 01:32:36.162771 kubelet[3344]: I1213 01:32:36.161574 3344 topology_manager.go:215] "Topology Admit Handler" podUID="dd3e0a8b569ebeb8c6321ca8c59c83e5" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-36" Dec 13 01:32:36.188885 kubelet[3344]: E1213 01:32:36.188817 3344 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-29-36\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-29-36" Dec 13 01:32:36.226541 kubelet[3344]: I1213 01:32:36.226355 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d4acbfaef23ef94975c05e4994182cf-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-36\" (UID: \"1d4acbfaef23ef94975c05e4994182cf\") " pod="kube-system/kube-controller-manager-ip-172-31-29-36" Dec 13 01:32:36.226541 kubelet[3344]: I1213 01:32:36.226394 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8d5c0784e80780a289d2da3a92e981f-ca-certs\") pod \"kube-apiserver-ip-172-31-29-36\" (UID: \"c8d5c0784e80780a289d2da3a92e981f\") " pod="kube-system/kube-apiserver-ip-172-31-29-36" Dec 13 01:32:36.226541 kubelet[3344]: I1213 01:32:36.226422 3344 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8d5c0784e80780a289d2da3a92e981f-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-36\" (UID: \"c8d5c0784e80780a289d2da3a92e981f\") " pod="kube-system/kube-apiserver-ip-172-31-29-36" Dec 13 01:32:36.226541 kubelet[3344]: I1213 01:32:36.226453 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8d5c0784e80780a289d2da3a92e981f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-36\" (UID: \"c8d5c0784e80780a289d2da3a92e981f\") " pod="kube-system/kube-apiserver-ip-172-31-29-36" Dec 13 01:32:36.226541 kubelet[3344]: I1213 01:32:36.226473 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1d4acbfaef23ef94975c05e4994182cf-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-36\" (UID: \"1d4acbfaef23ef94975c05e4994182cf\") " pod="kube-system/kube-controller-manager-ip-172-31-29-36" Dec 13 01:32:36.226874 kubelet[3344]: I1213 01:32:36.226497 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d4acbfaef23ef94975c05e4994182cf-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-36\" (UID: \"1d4acbfaef23ef94975c05e4994182cf\") " pod="kube-system/kube-controller-manager-ip-172-31-29-36" Dec 13 01:32:36.226874 kubelet[3344]: I1213 01:32:36.226538 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d4acbfaef23ef94975c05e4994182cf-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-36\" (UID: \"1d4acbfaef23ef94975c05e4994182cf\") " pod="kube-system/kube-controller-manager-ip-172-31-29-36" Dec 13 01:32:36.226874 
kubelet[3344]: I1213 01:32:36.226567 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d4acbfaef23ef94975c05e4994182cf-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-36\" (UID: \"1d4acbfaef23ef94975c05e4994182cf\") " pod="kube-system/kube-controller-manager-ip-172-31-29-36" Dec 13 01:32:36.226874 kubelet[3344]: I1213 01:32:36.226597 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3e0a8b569ebeb8c6321ca8c59c83e5-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-36\" (UID: \"dd3e0a8b569ebeb8c6321ca8c59c83e5\") " pod="kube-system/kube-scheduler-ip-172-31-29-36" Dec 13 01:32:36.783036 kubelet[3344]: I1213 01:32:36.782167 3344 apiserver.go:52] "Watching apiserver" Dec 13 01:32:36.789547 sudo[3357]: pam_unix(sudo:session): session closed for user root Dec 13 01:32:36.823658 kubelet[3344]: I1213 01:32:36.823577 3344 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:32:36.948178 kubelet[3344]: E1213 01:32:36.948142 3344 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-36\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-36" Dec 13 01:32:37.006235 kubelet[3344]: I1213 01:32:37.006154 3344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-36" podStartSLOduration=1.006055588 podStartE2EDuration="1.006055588s" podCreationTimestamp="2024-12-13 01:32:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:36.974271727 +0000 UTC m=+1.365582681" watchObservedRunningTime="2024-12-13 01:32:37.006055588 +0000 UTC m=+1.397366535" Dec 13 01:32:37.043943 kubelet[3344]: I1213 01:32:37.042840 3344 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-36" podStartSLOduration=1.042788475 podStartE2EDuration="1.042788475s" podCreationTimestamp="2024-12-13 01:32:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:37.007209224 +0000 UTC m=+1.398520178" watchObservedRunningTime="2024-12-13 01:32:37.042788475 +0000 UTC m=+1.434099426" Dec 13 01:32:37.094888 kubelet[3344]: I1213 01:32:37.094116 3344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-36" podStartSLOduration=5.094065689 podStartE2EDuration="5.094065689s" podCreationTimestamp="2024-12-13 01:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:37.044942309 +0000 UTC m=+1.436253267" watchObservedRunningTime="2024-12-13 01:32:37.094065689 +0000 UTC m=+1.485376644" Dec 13 01:32:38.729655 sudo[2320]: pam_unix(sudo:session): session closed for user root Dec 13 01:32:38.753419 sshd[2317]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:38.757326 systemd[1]: sshd@8-172.31.29.36:22-139.178.68.195:47784.service: Deactivated successfully. Dec 13 01:32:38.760031 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:32:38.760595 systemd[1]: session-9.scope: Consumed 5.440s CPU time, 187.1M memory peak, 0B memory swap peak. Dec 13 01:32:38.762489 systemd-logind[1946]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:32:38.764269 systemd-logind[1946]: Removed session 9. 
Dec 13 01:32:47.038614 kubelet[3344]: I1213 01:32:47.038366 3344 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:32:47.043771 containerd[1957]: time="2024-12-13T01:32:47.043061475Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:32:47.044181 kubelet[3344]: I1213 01:32:47.043518 3344 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:32:47.156549 kubelet[3344]: I1213 01:32:47.156153 3344 topology_manager.go:215] "Topology Admit Handler" podUID="cd71ab8c-d91a-4865-8368-5eae541637e5" podNamespace="kube-system" podName="kube-proxy-9lts9" Dec 13 01:32:47.176149 systemd[1]: Created slice kubepods-besteffort-podcd71ab8c_d91a_4865_8368_5eae541637e5.slice - libcontainer container kubepods-besteffort-podcd71ab8c_d91a_4865_8368_5eae541637e5.slice. Dec 13 01:32:47.198615 kubelet[3344]: I1213 01:32:47.197885 3344 topology_manager.go:215] "Topology Admit Handler" podUID="e3103c01-c084-486e-82d5-eb4738245941" podNamespace="kube-system" podName="cilium-9c9jq" Dec 13 01:32:47.221064 kubelet[3344]: W1213 01:32:47.220718 3344 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-29-36" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-36' and this object Dec 13 01:32:47.221064 kubelet[3344]: E1213 01:32:47.220769 3344 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-29-36" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-36' and this object Dec 13 01:32:47.226477 kubelet[3344]: I1213 01:32:47.226241 3344 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-cni-path\") pod \"cilium-9c9jq\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") " pod="kube-system/cilium-9c9jq" Dec 13 01:32:47.226477 kubelet[3344]: I1213 01:32:47.226429 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3103c01-c084-486e-82d5-eb4738245941-hubble-tls\") pod \"cilium-9c9jq\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") " pod="kube-system/cilium-9c9jq" Dec 13 01:32:47.227226 kubelet[3344]: I1213 01:32:47.226935 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-cilium-run\") pod \"cilium-9c9jq\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") " pod="kube-system/cilium-9c9jq" Dec 13 01:32:47.227226 kubelet[3344]: I1213 01:32:47.227000 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-etc-cni-netd\") pod \"cilium-9c9jq\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") " pod="kube-system/cilium-9c9jq" Dec 13 01:32:47.227226 kubelet[3344]: I1213 01:32:47.227030 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3103c01-c084-486e-82d5-eb4738245941-clustermesh-secrets\") pod \"cilium-9c9jq\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") " pod="kube-system/cilium-9c9jq" Dec 13 01:32:47.227226 kubelet[3344]: I1213 01:32:47.227082 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnnsv\" (UniqueName: 
\"kubernetes.io/projected/cd71ab8c-d91a-4865-8368-5eae541637e5-kube-api-access-fnnsv\") pod \"kube-proxy-9lts9\" (UID: \"cd71ab8c-d91a-4865-8368-5eae541637e5\") " pod="kube-system/kube-proxy-9lts9" Dec 13 01:32:47.227226 kubelet[3344]: I1213 01:32:47.227112 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-cilium-cgroup\") pod \"cilium-9c9jq\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") " pod="kube-system/cilium-9c9jq" Dec 13 01:32:47.227226 kubelet[3344]: I1213 01:32:47.227163 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-lib-modules\") pod \"cilium-9c9jq\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") " pod="kube-system/cilium-9c9jq" Dec 13 01:32:47.229398 kubelet[3344]: I1213 01:32:47.228995 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-host-proc-sys-kernel\") pod \"cilium-9c9jq\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") " pod="kube-system/cilium-9c9jq" Dec 13 01:32:47.229398 kubelet[3344]: I1213 01:32:47.229092 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jckm\" (UniqueName: \"kubernetes.io/projected/e3103c01-c084-486e-82d5-eb4738245941-kube-api-access-6jckm\") pod \"cilium-9c9jq\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") " pod="kube-system/cilium-9c9jq" Dec 13 01:32:47.229398 kubelet[3344]: I1213 01:32:47.229368 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cd71ab8c-d91a-4865-8368-5eae541637e5-kube-proxy\") pod 
\"kube-proxy-9lts9\" (UID: \"cd71ab8c-d91a-4865-8368-5eae541637e5\") " pod="kube-system/kube-proxy-9lts9" Dec 13 01:32:47.231949 kubelet[3344]: I1213 01:32:47.230408 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-bpf-maps\") pod \"cilium-9c9jq\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") " pod="kube-system/cilium-9c9jq" Dec 13 01:32:47.231949 kubelet[3344]: I1213 01:32:47.230456 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-hostproc\") pod \"cilium-9c9jq\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") " pod="kube-system/cilium-9c9jq" Dec 13 01:32:47.231949 kubelet[3344]: I1213 01:32:47.230485 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-host-proc-sys-net\") pod \"cilium-9c9jq\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") " pod="kube-system/cilium-9c9jq" Dec 13 01:32:47.231949 kubelet[3344]: I1213 01:32:47.230518 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd71ab8c-d91a-4865-8368-5eae541637e5-lib-modules\") pod \"kube-proxy-9lts9\" (UID: \"cd71ab8c-d91a-4865-8368-5eae541637e5\") " pod="kube-system/kube-proxy-9lts9" Dec 13 01:32:47.231949 kubelet[3344]: I1213 01:32:47.230547 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd71ab8c-d91a-4865-8368-5eae541637e5-xtables-lock\") pod \"kube-proxy-9lts9\" (UID: \"cd71ab8c-d91a-4865-8368-5eae541637e5\") " pod="kube-system/kube-proxy-9lts9" Dec 13 01:32:47.231949 
kubelet[3344]: I1213 01:32:47.230573 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-xtables-lock\") pod \"cilium-9c9jq\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") " pod="kube-system/cilium-9c9jq" Dec 13 01:32:47.232326 kubelet[3344]: I1213 01:32:47.230609 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3103c01-c084-486e-82d5-eb4738245941-cilium-config-path\") pod \"cilium-9c9jq\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") " pod="kube-system/cilium-9c9jq" Dec 13 01:32:47.241821 systemd[1]: Created slice kubepods-burstable-pode3103c01_c084_486e_82d5_eb4738245941.slice - libcontainer container kubepods-burstable-pode3103c01_c084_486e_82d5_eb4738245941.slice. Dec 13 01:32:47.433786 kubelet[3344]: I1213 01:32:47.433556 3344 topology_manager.go:215] "Topology Admit Handler" podUID="a5719724-ea25-44fb-b01a-887e072b33c9" podNamespace="kube-system" podName="cilium-operator-5cc964979-ntzwc" Dec 13 01:32:47.449706 systemd[1]: Created slice kubepods-besteffort-poda5719724_ea25_44fb_b01a_887e072b33c9.slice - libcontainer container kubepods-besteffort-poda5719724_ea25_44fb_b01a_887e072b33c9.slice. 
Dec 13 01:32:47.500909 containerd[1957]: time="2024-12-13T01:32:47.500865571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9lts9,Uid:cd71ab8c-d91a-4865-8368-5eae541637e5,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:47.539321 kubelet[3344]: I1213 01:32:47.536505 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65hx7\" (UniqueName: \"kubernetes.io/projected/a5719724-ea25-44fb-b01a-887e072b33c9-kube-api-access-65hx7\") pod \"cilium-operator-5cc964979-ntzwc\" (UID: \"a5719724-ea25-44fb-b01a-887e072b33c9\") " pod="kube-system/cilium-operator-5cc964979-ntzwc" Dec 13 01:32:47.539321 kubelet[3344]: I1213 01:32:47.536790 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5719724-ea25-44fb-b01a-887e072b33c9-cilium-config-path\") pod \"cilium-operator-5cc964979-ntzwc\" (UID: \"a5719724-ea25-44fb-b01a-887e072b33c9\") " pod="kube-system/cilium-operator-5cc964979-ntzwc" Dec 13 01:32:47.610725 containerd[1957]: time="2024-12-13T01:32:47.610532818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:47.610949 containerd[1957]: time="2024-12-13T01:32:47.610748175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:47.610949 containerd[1957]: time="2024-12-13T01:32:47.610783431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:47.611499 containerd[1957]: time="2024-12-13T01:32:47.610917035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:47.667090 systemd[1]: Started cri-containerd-b7612a431136772a5be7e7471710f2de4e3fbecb6ee51eb7a631c56d35727560.scope - libcontainer container b7612a431136772a5be7e7471710f2de4e3fbecb6ee51eb7a631c56d35727560. Dec 13 01:32:47.708936 containerd[1957]: time="2024-12-13T01:32:47.708740726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9lts9,Uid:cd71ab8c-d91a-4865-8368-5eae541637e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7612a431136772a5be7e7471710f2de4e3fbecb6ee51eb7a631c56d35727560\"" Dec 13 01:32:47.714142 containerd[1957]: time="2024-12-13T01:32:47.714098803Z" level=info msg="CreateContainer within sandbox \"b7612a431136772a5be7e7471710f2de4e3fbecb6ee51eb7a631c56d35727560\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:32:47.741522 containerd[1957]: time="2024-12-13T01:32:47.741470755Z" level=info msg="CreateContainer within sandbox \"b7612a431136772a5be7e7471710f2de4e3fbecb6ee51eb7a631c56d35727560\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7d44354cdaceb0fc839e66e5b3f4afd3e70fc5c3925338a78d5c601e20236f4d\"" Dec 13 01:32:47.742496 containerd[1957]: time="2024-12-13T01:32:47.742453297Z" level=info msg="StartContainer for \"7d44354cdaceb0fc839e66e5b3f4afd3e70fc5c3925338a78d5c601e20236f4d\"" Dec 13 01:32:47.757802 containerd[1957]: time="2024-12-13T01:32:47.757752577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-ntzwc,Uid:a5719724-ea25-44fb-b01a-887e072b33c9,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:47.778509 systemd[1]: Started cri-containerd-7d44354cdaceb0fc839e66e5b3f4afd3e70fc5c3925338a78d5c601e20236f4d.scope - libcontainer container 7d44354cdaceb0fc839e66e5b3f4afd3e70fc5c3925338a78d5c601e20236f4d. Dec 13 01:32:47.814311 containerd[1957]: time="2024-12-13T01:32:47.812516166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:47.814311 containerd[1957]: time="2024-12-13T01:32:47.813956680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:47.814311 containerd[1957]: time="2024-12-13T01:32:47.813987360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:47.814311 containerd[1957]: time="2024-12-13T01:32:47.814094196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:47.844511 systemd[1]: Started cri-containerd-e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77.scope - libcontainer container e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77. Dec 13 01:32:47.857549 containerd[1957]: time="2024-12-13T01:32:47.857487160Z" level=info msg="StartContainer for \"7d44354cdaceb0fc839e66e5b3f4afd3e70fc5c3925338a78d5c601e20236f4d\" returns successfully" Dec 13 01:32:47.921406 containerd[1957]: time="2024-12-13T01:32:47.921365252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-ntzwc,Uid:a5719724-ea25-44fb-b01a-887e072b33c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77\"" Dec 13 01:32:47.934574 containerd[1957]: time="2024-12-13T01:32:47.934542756Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:32:47.969893 kubelet[3344]: I1213 01:32:47.969449 3344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9lts9" podStartSLOduration=0.969399925 podStartE2EDuration="969.399925ms" podCreationTimestamp="2024-12-13 01:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:32:47.969337422 +0000 UTC m=+12.360648380" watchObservedRunningTime="2024-12-13 01:32:47.969399925 +0000 UTC m=+12.360710883" Dec 13 01:32:48.453589 containerd[1957]: time="2024-12-13T01:32:48.453536278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9c9jq,Uid:e3103c01-c084-486e-82d5-eb4738245941,Namespace:kube-system,Attempt:0,}" Dec 13 01:32:48.520983 containerd[1957]: time="2024-12-13T01:32:48.520863680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:32:48.522463 containerd[1957]: time="2024-12-13T01:32:48.521865680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:32:48.522463 containerd[1957]: time="2024-12-13T01:32:48.521947956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:48.522463 containerd[1957]: time="2024-12-13T01:32:48.522092248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:32:48.579668 systemd[1]: Started cri-containerd-c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1.scope - libcontainer container c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1. Dec 13 01:32:48.638373 containerd[1957]: time="2024-12-13T01:32:48.638319322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9c9jq,Uid:e3103c01-c084-486e-82d5-eb4738245941,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\"" Dec 13 01:32:50.568571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1284731225.mount: Deactivated successfully. 
Dec 13 01:32:51.453548 containerd[1957]: time="2024-12-13T01:32:51.453487424Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:51.456899 containerd[1957]: time="2024-12-13T01:32:51.456428814Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906605" Dec 13 01:32:51.459575 containerd[1957]: time="2024-12-13T01:32:51.459526471Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:32:51.462416 containerd[1957]: time="2024-12-13T01:32:51.462371815Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.52762231s" Dec 13 01:32:51.462660 containerd[1957]: time="2024-12-13T01:32:51.462556647Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 01:32:51.471960 containerd[1957]: time="2024-12-13T01:32:51.471216139Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:32:51.491126 containerd[1957]: time="2024-12-13T01:32:51.490569148Z" level=info msg="CreateContainer within sandbox 
\"e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:32:51.524138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount701613907.mount: Deactivated successfully. Dec 13 01:32:51.532457 containerd[1957]: time="2024-12-13T01:32:51.531898533Z" level=info msg="CreateContainer within sandbox \"e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9787a3312142f43f0711b6e2cbc723dad1887b913b3d6225fd34291e9d9286ba\"" Dec 13 01:32:51.536170 containerd[1957]: time="2024-12-13T01:32:51.536117491Z" level=info msg="StartContainer for \"9787a3312142f43f0711b6e2cbc723dad1887b913b3d6225fd34291e9d9286ba\"" Dec 13 01:32:51.616228 systemd[1]: run-containerd-runc-k8s.io-9787a3312142f43f0711b6e2cbc723dad1887b913b3d6225fd34291e9d9286ba-runc.lfOmRV.mount: Deactivated successfully. Dec 13 01:32:51.628534 systemd[1]: Started cri-containerd-9787a3312142f43f0711b6e2cbc723dad1887b913b3d6225fd34291e9d9286ba.scope - libcontainer container 9787a3312142f43f0711b6e2cbc723dad1887b913b3d6225fd34291e9d9286ba. 
Dec 13 01:32:51.668623 containerd[1957]: time="2024-12-13T01:32:51.668562936Z" level=info msg="StartContainer for \"9787a3312142f43f0711b6e2cbc723dad1887b913b3d6225fd34291e9d9286ba\" returns successfully" Dec 13 01:32:52.029352 kubelet[3344]: I1213 01:32:52.029306 3344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-ntzwc" podStartSLOduration=1.483160282 podStartE2EDuration="5.029224749s" podCreationTimestamp="2024-12-13 01:32:47 +0000 UTC" firstStartedPulling="2024-12-13 01:32:47.923953189 +0000 UTC m=+12.315264136" lastFinishedPulling="2024-12-13 01:32:51.470017655 +0000 UTC m=+15.861328603" observedRunningTime="2024-12-13 01:32:52.020333911 +0000 UTC m=+16.411644873" watchObservedRunningTime="2024-12-13 01:32:52.029224749 +0000 UTC m=+16.420535721" Dec 13 01:33:01.437072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116169368.mount: Deactivated successfully. Dec 13 01:33:05.998444 containerd[1957]: time="2024-12-13T01:33:05.998375751Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:06.019485 containerd[1957]: time="2024-12-13T01:33:06.019166350Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735307" Dec 13 01:33:06.031977 containerd[1957]: time="2024-12-13T01:33:06.031544201Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:33:06.033796 containerd[1957]: time="2024-12-13T01:33:06.033751716Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.56248291s" Dec 13 01:33:06.034244 containerd[1957]: time="2024-12-13T01:33:06.033965952Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 01:33:06.045111 containerd[1957]: time="2024-12-13T01:33:06.044921305Z" level=info msg="CreateContainer within sandbox \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:33:06.221530 containerd[1957]: time="2024-12-13T01:33:06.221474267Z" level=info msg="CreateContainer within sandbox \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"58ff625eeeea4155ec5f1ab8408b4271eb3c2f631d28c04c8914a83877aa2455\"" Dec 13 01:33:06.223632 containerd[1957]: time="2024-12-13T01:33:06.223590747Z" level=info msg="StartContainer for \"58ff625eeeea4155ec5f1ab8408b4271eb3c2f631d28c04c8914a83877aa2455\"" Dec 13 01:33:06.407914 systemd[1]: run-containerd-runc-k8s.io-58ff625eeeea4155ec5f1ab8408b4271eb3c2f631d28c04c8914a83877aa2455-runc.Badr1O.mount: Deactivated successfully. Dec 13 01:33:06.420717 systemd[1]: Started cri-containerd-58ff625eeeea4155ec5f1ab8408b4271eb3c2f631d28c04c8914a83877aa2455.scope - libcontainer container 58ff625eeeea4155ec5f1ab8408b4271eb3c2f631d28c04c8914a83877aa2455. 
Dec 13 01:33:06.503371 containerd[1957]: time="2024-12-13T01:33:06.503278849Z" level=info msg="StartContainer for \"58ff625eeeea4155ec5f1ab8408b4271eb3c2f631d28c04c8914a83877aa2455\" returns successfully"
Dec 13 01:33:06.564174 systemd[1]: cri-containerd-58ff625eeeea4155ec5f1ab8408b4271eb3c2f631d28c04c8914a83877aa2455.scope: Deactivated successfully.
Dec 13 01:33:07.175572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58ff625eeeea4155ec5f1ab8408b4271eb3c2f631d28c04c8914a83877aa2455-rootfs.mount: Deactivated successfully.
Dec 13 01:33:07.941835 containerd[1957]: time="2024-12-13T01:33:07.917549464Z" level=info msg="shim disconnected" id=58ff625eeeea4155ec5f1ab8408b4271eb3c2f631d28c04c8914a83877aa2455 namespace=k8s.io
Dec 13 01:33:07.941835 containerd[1957]: time="2024-12-13T01:33:07.941781575Z" level=warning msg="cleaning up after shim disconnected" id=58ff625eeeea4155ec5f1ab8408b4271eb3c2f631d28c04c8914a83877aa2455 namespace=k8s.io
Dec 13 01:33:07.941835 containerd[1957]: time="2024-12-13T01:33:07.941802423Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:33:08.083905 containerd[1957]: time="2024-12-13T01:33:08.083657232Z" level=info msg="CreateContainer within sandbox \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:33:08.114853 containerd[1957]: time="2024-12-13T01:33:08.114805715Z" level=info msg="CreateContainer within sandbox \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b51649b98512fe8ed4572b3c4bd9650fdad29215a72efba0098be1c104362475\""
Dec 13 01:33:08.120165 containerd[1957]: time="2024-12-13T01:33:08.118971307Z" level=info msg="StartContainer for \"b51649b98512fe8ed4572b3c4bd9650fdad29215a72efba0098be1c104362475\""
Dec 13 01:33:08.167667 systemd[1]: Started cri-containerd-b51649b98512fe8ed4572b3c4bd9650fdad29215a72efba0098be1c104362475.scope - libcontainer container b51649b98512fe8ed4572b3c4bd9650fdad29215a72efba0098be1c104362475.
Dec 13 01:33:08.175610 systemd[1]: run-containerd-runc-k8s.io-b51649b98512fe8ed4572b3c4bd9650fdad29215a72efba0098be1c104362475-runc.ypmTyZ.mount: Deactivated successfully.
Dec 13 01:33:08.216469 containerd[1957]: time="2024-12-13T01:33:08.216238528Z" level=info msg="StartContainer for \"b51649b98512fe8ed4572b3c4bd9650fdad29215a72efba0098be1c104362475\" returns successfully"
Dec 13 01:33:08.234652 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:33:08.234926 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:33:08.235010 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:33:08.241494 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:33:08.296113 systemd[1]: cri-containerd-b51649b98512fe8ed4572b3c4bd9650fdad29215a72efba0098be1c104362475.scope: Deactivated successfully.
Dec 13 01:33:08.298265 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:33:08.331017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b51649b98512fe8ed4572b3c4bd9650fdad29215a72efba0098be1c104362475-rootfs.mount: Deactivated successfully.
Dec 13 01:33:08.335168 containerd[1957]: time="2024-12-13T01:33:08.335105883Z" level=info msg="shim disconnected" id=b51649b98512fe8ed4572b3c4bd9650fdad29215a72efba0098be1c104362475 namespace=k8s.io
Dec 13 01:33:08.335168 containerd[1957]: time="2024-12-13T01:33:08.335156286Z" level=warning msg="cleaning up after shim disconnected" id=b51649b98512fe8ed4572b3c4bd9650fdad29215a72efba0098be1c104362475 namespace=k8s.io
Dec 13 01:33:08.335168 containerd[1957]: time="2024-12-13T01:33:08.335169141Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:33:09.085332 containerd[1957]: time="2024-12-13T01:33:09.085107847Z" level=info msg="CreateContainer within sandbox \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:33:09.132940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1714578029.mount: Deactivated successfully.
Dec 13 01:33:09.139364 containerd[1957]: time="2024-12-13T01:33:09.139313338Z" level=info msg="CreateContainer within sandbox \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bfab08c37bea8356ba21a8da38494e96660d9f1c14b3a837c3d26f8e4d56f586\""
Dec 13 01:33:09.144648 containerd[1957]: time="2024-12-13T01:33:09.144595039Z" level=info msg="StartContainer for \"bfab08c37bea8356ba21a8da38494e96660d9f1c14b3a837c3d26f8e4d56f586\""
Dec 13 01:33:09.196767 systemd[1]: Started cri-containerd-bfab08c37bea8356ba21a8da38494e96660d9f1c14b3a837c3d26f8e4d56f586.scope - libcontainer container bfab08c37bea8356ba21a8da38494e96660d9f1c14b3a837c3d26f8e4d56f586.
Dec 13 01:33:09.241760 containerd[1957]: time="2024-12-13T01:33:09.241633679Z" level=info msg="StartContainer for \"bfab08c37bea8356ba21a8da38494e96660d9f1c14b3a837c3d26f8e4d56f586\" returns successfully"
Dec 13 01:33:09.253166 systemd[1]: cri-containerd-bfab08c37bea8356ba21a8da38494e96660d9f1c14b3a837c3d26f8e4d56f586.scope: Deactivated successfully.
Dec 13 01:33:09.290536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfab08c37bea8356ba21a8da38494e96660d9f1c14b3a837c3d26f8e4d56f586-rootfs.mount: Deactivated successfully.
Dec 13 01:33:09.309433 containerd[1957]: time="2024-12-13T01:33:09.309143497Z" level=info msg="shim disconnected" id=bfab08c37bea8356ba21a8da38494e96660d9f1c14b3a837c3d26f8e4d56f586 namespace=k8s.io
Dec 13 01:33:09.309433 containerd[1957]: time="2024-12-13T01:33:09.309204829Z" level=warning msg="cleaning up after shim disconnected" id=bfab08c37bea8356ba21a8da38494e96660d9f1c14b3a837c3d26f8e4d56f586 namespace=k8s.io
Dec 13 01:33:09.309433 containerd[1957]: time="2024-12-13T01:33:09.309216540Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:33:10.101067 containerd[1957]: time="2024-12-13T01:33:10.101016584Z" level=info msg="CreateContainer within sandbox \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:33:10.153659 containerd[1957]: time="2024-12-13T01:33:10.153610984Z" level=info msg="CreateContainer within sandbox \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2e7a611292a900db79fe22e9e00c0518694691a415345d74b2fe9f8c2f0f424f\""
Dec 13 01:33:10.158352 containerd[1957]: time="2024-12-13T01:33:10.155020033Z" level=info msg="StartContainer for \"2e7a611292a900db79fe22e9e00c0518694691a415345d74b2fe9f8c2f0f424f\""
Dec 13 01:33:10.231584 systemd[1]: Started cri-containerd-2e7a611292a900db79fe22e9e00c0518694691a415345d74b2fe9f8c2f0f424f.scope - libcontainer container 2e7a611292a900db79fe22e9e00c0518694691a415345d74b2fe9f8c2f0f424f.
Dec 13 01:33:10.290844 systemd[1]: cri-containerd-2e7a611292a900db79fe22e9e00c0518694691a415345d74b2fe9f8c2f0f424f.scope: Deactivated successfully.
Dec 13 01:33:10.294782 containerd[1957]: time="2024-12-13T01:33:10.294740268Z" level=info msg="StartContainer for \"2e7a611292a900db79fe22e9e00c0518694691a415345d74b2fe9f8c2f0f424f\" returns successfully"
Dec 13 01:33:10.330102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e7a611292a900db79fe22e9e00c0518694691a415345d74b2fe9f8c2f0f424f-rootfs.mount: Deactivated successfully.
Dec 13 01:33:10.342532 containerd[1957]: time="2024-12-13T01:33:10.342440185Z" level=info msg="shim disconnected" id=2e7a611292a900db79fe22e9e00c0518694691a415345d74b2fe9f8c2f0f424f namespace=k8s.io
Dec 13 01:33:10.342532 containerd[1957]: time="2024-12-13T01:33:10.342516291Z" level=warning msg="cleaning up after shim disconnected" id=2e7a611292a900db79fe22e9e00c0518694691a415345d74b2fe9f8c2f0f424f namespace=k8s.io
Dec 13 01:33:10.342532 containerd[1957]: time="2024-12-13T01:33:10.342530023Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:33:10.363627 containerd[1957]: time="2024-12-13T01:33:10.363495590Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:33:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:33:11.110564 containerd[1957]: time="2024-12-13T01:33:11.110437318Z" level=info msg="CreateContainer within sandbox \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:33:11.165393 containerd[1957]: time="2024-12-13T01:33:11.163264761Z" level=info msg="CreateContainer within sandbox \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2c7f401fe1f748b5aa5ba93fcf458e3324b98a4f2174de0a02946c18b1947795\""
Dec 13 01:33:11.167164 containerd[1957]: time="2024-12-13T01:33:11.166804129Z" level=info msg="StartContainer for \"2c7f401fe1f748b5aa5ba93fcf458e3324b98a4f2174de0a02946c18b1947795\""
Dec 13 01:33:11.227777 systemd[1]: Started cri-containerd-2c7f401fe1f748b5aa5ba93fcf458e3324b98a4f2174de0a02946c18b1947795.scope - libcontainer container 2c7f401fe1f748b5aa5ba93fcf458e3324b98a4f2174de0a02946c18b1947795.
Dec 13 01:33:11.277713 containerd[1957]: time="2024-12-13T01:33:11.277619850Z" level=info msg="StartContainer for \"2c7f401fe1f748b5aa5ba93fcf458e3324b98a4f2174de0a02946c18b1947795\" returns successfully"
Dec 13 01:33:11.386930 systemd[1]: run-containerd-runc-k8s.io-2c7f401fe1f748b5aa5ba93fcf458e3324b98a4f2174de0a02946c18b1947795-runc.6pAWCf.mount: Deactivated successfully.
Dec 13 01:33:11.569460 kubelet[3344]: I1213 01:33:11.569424 3344 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 01:33:11.615764 kubelet[3344]: I1213 01:33:11.615713 3344 topology_manager.go:215] "Topology Admit Handler" podUID="4e07e2b6-7463-46e7-b83f-d2250171da40" podNamespace="kube-system" podName="coredns-76f75df574-px6z6"
Dec 13 01:33:11.634739 kubelet[3344]: I1213 01:33:11.632243 3344 topology_manager.go:215] "Topology Admit Handler" podUID="b2c17690-4f8b-4fd9-a6e4-751163da93da" podNamespace="kube-system" podName="coredns-76f75df574-2l6ks"
Dec 13 01:33:11.637391 systemd[1]: Created slice kubepods-burstable-pod4e07e2b6_7463_46e7_b83f_d2250171da40.slice - libcontainer container kubepods-burstable-pod4e07e2b6_7463_46e7_b83f_d2250171da40.slice.
Dec 13 01:33:11.649316 systemd[1]: Created slice kubepods-burstable-podb2c17690_4f8b_4fd9_a6e4_751163da93da.slice - libcontainer container kubepods-burstable-podb2c17690_4f8b_4fd9_a6e4_751163da93da.slice.
Dec 13 01:33:11.730536 kubelet[3344]: I1213 01:33:11.730472 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2c17690-4f8b-4fd9-a6e4-751163da93da-config-volume\") pod \"coredns-76f75df574-2l6ks\" (UID: \"b2c17690-4f8b-4fd9-a6e4-751163da93da\") " pod="kube-system/coredns-76f75df574-2l6ks"
Dec 13 01:33:11.731284 kubelet[3344]: I1213 01:33:11.730674 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e07e2b6-7463-46e7-b83f-d2250171da40-config-volume\") pod \"coredns-76f75df574-px6z6\" (UID: \"4e07e2b6-7463-46e7-b83f-d2250171da40\") " pod="kube-system/coredns-76f75df574-px6z6"
Dec 13 01:33:11.731284 kubelet[3344]: I1213 01:33:11.731004 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dqkm\" (UniqueName: \"kubernetes.io/projected/4e07e2b6-7463-46e7-b83f-d2250171da40-kube-api-access-8dqkm\") pod \"coredns-76f75df574-px6z6\" (UID: \"4e07e2b6-7463-46e7-b83f-d2250171da40\") " pod="kube-system/coredns-76f75df574-px6z6"
Dec 13 01:33:11.731284 kubelet[3344]: I1213 01:33:11.731174 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws7k5\" (UniqueName: \"kubernetes.io/projected/b2c17690-4f8b-4fd9-a6e4-751163da93da-kube-api-access-ws7k5\") pod \"coredns-76f75df574-2l6ks\" (UID: \"b2c17690-4f8b-4fd9-a6e4-751163da93da\") " pod="kube-system/coredns-76f75df574-2l6ks"
Dec 13 01:33:11.944338 containerd[1957]: time="2024-12-13T01:33:11.944182238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-px6z6,Uid:4e07e2b6-7463-46e7-b83f-d2250171da40,Namespace:kube-system,Attempt:0,}"
Dec 13 01:33:11.969007 containerd[1957]: time="2024-12-13T01:33:11.968957742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2l6ks,Uid:b2c17690-4f8b-4fd9-a6e4-751163da93da,Namespace:kube-system,Attempt:0,}"
Dec 13 01:33:12.223485 kubelet[3344]: I1213 01:33:12.223259 3344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9c9jq" podStartSLOduration=7.830205588 podStartE2EDuration="25.221771056s" podCreationTimestamp="2024-12-13 01:32:47 +0000 UTC" firstStartedPulling="2024-12-13 01:32:48.643111145 +0000 UTC m=+13.034422099" lastFinishedPulling="2024-12-13 01:33:06.034676622 +0000 UTC m=+30.425987567" observedRunningTime="2024-12-13 01:33:12.221634595 +0000 UTC m=+36.612945549" watchObservedRunningTime="2024-12-13 01:33:12.221771056 +0000 UTC m=+36.613082010"
Dec 13 01:33:14.035116 systemd-networkd[1809]: cilium_host: Link UP
Dec 13 01:33:14.035354 systemd-networkd[1809]: cilium_net: Link UP
Dec 13 01:33:14.036878 systemd-networkd[1809]: cilium_net: Gained carrier
Dec 13 01:33:14.038651 systemd-networkd[1809]: cilium_host: Gained carrier
Dec 13 01:33:14.039086 (udev-worker)[4112]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:33:14.040427 (udev-worker)[4169]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:33:14.201900 systemd-networkd[1809]: cilium_vxlan: Link UP
Dec 13 01:33:14.201910 systemd-networkd[1809]: cilium_vxlan: Gained carrier
Dec 13 01:33:14.282622 systemd-networkd[1809]: cilium_net: Gained IPv6LL
Dec 13 01:33:14.386492 systemd-networkd[1809]: cilium_host: Gained IPv6LL
Dec 13 01:33:14.808461 kernel: NET: Registered PF_ALG protocol family
Dec 13 01:33:15.643355 systemd-networkd[1809]: cilium_vxlan: Gained IPv6LL
Dec 13 01:33:15.837490 systemd-networkd[1809]: lxc_health: Link UP
Dec 13 01:33:15.844373 (udev-worker)[4201]: Network interface NamePolicy= disabled on kernel command line.
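The kubelet latency entry above for cilium-9c9jq reports podStartE2EDuration="25.221771056s" but podStartSLOduration=7.830205588: the E2E figure is watchObservedRunningTime minus podCreationTimestamp, while the SLO figure additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). A sketch reproducing that arithmetic from the logged timestamps, assuming this reading of the fields (the `ts` helper is illustrative, not a kubelet API; the logged nanoseconds are truncated to microseconds for `strptime`):

```python
from datetime import datetime

def ts(s: str) -> datetime:
    # Trim the logged 9-digit fractional seconds to 6 digits for %f.
    return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f")

created    = datetime.strptime("2024-12-13 01:32:47", "%Y-%m-%d %H:%M:%S")
first_pull = ts("2024-12-13 01:32:48.643111145")   # firstStartedPulling
last_pull  = ts("2024-12-13 01:33:06.034676622")   # lastFinishedPulling
observed   = ts("2024-12-13 01:33:12.221771056")   # watchObservedRunningTime

e2e = (observed - created).total_seconds()            # podStartE2EDuration
slo = e2e - (last_pull - first_pull).total_seconds()  # podStartSLOduration
print(f"E2E={e2e:.6f}s SLO={slo:.6f}s")
```

Run against the logged values, this recovers about 25.221771s and 7.830206s, matching the entry to microsecond precision.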
Dec 13 01:33:15.845945 systemd-networkd[1809]: lxc_health: Gained carrier
Dec 13 01:33:16.603676 systemd-networkd[1809]: lxcc2908d7da0a6: Link UP
Dec 13 01:33:16.608316 kernel: eth0: renamed from tmp5f78e
Dec 13 01:33:16.614615 systemd-networkd[1809]: lxcc2908d7da0a6: Gained carrier
Dec 13 01:33:16.653827 systemd-networkd[1809]: lxc16aee1addafa: Link UP
Dec 13 01:33:16.658333 kernel: eth0: renamed from tmp584bf
Dec 13 01:33:16.662143 systemd-networkd[1809]: lxc16aee1addafa: Gained carrier
Dec 13 01:33:17.690582 systemd-networkd[1809]: lxc_health: Gained IPv6LL
Dec 13 01:33:17.692500 systemd-networkd[1809]: lxc16aee1addafa: Gained IPv6LL
Dec 13 01:33:18.655767 systemd-networkd[1809]: lxcc2908d7da0a6: Gained IPv6LL
Dec 13 01:33:20.368506 systemd[1]: Started sshd@9-172.31.29.36:22-139.178.68.195:39650.service - OpenSSH per-connection server daemon (139.178.68.195:39650).
Dec 13 01:33:20.617518 sshd[4533]: Accepted publickey for core from 139.178.68.195 port 39650 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:33:20.621700 sshd[4533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:20.640374 systemd-logind[1946]: New session 10 of user core.
Dec 13 01:33:20.644581 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:33:21.427108 ntpd[1940]: Listen normally on 7 cilium_host 192.168.0.190:123
Dec 13 01:33:21.427235 ntpd[1940]: Listen normally on 8 cilium_net [fe80::b4c5:deff:fee1:ba8c%4]:123
Dec 13 01:33:21.427832 ntpd[1940]: Listen normally on 9 cilium_host [fe80::fc86:50ff:fe3f:49af%5]:123
Dec 13 01:33:21.427914 ntpd[1940]: Listen normally on 10 cilium_vxlan [fe80::3495:83ff:fe5a:7e39%6]:123
Dec 13 01:33:21.427959 ntpd[1940]: Listen normally on 11 lxc_health [fe80::544a:11ff:fef9:d9a6%8]:123
Dec 13 01:33:21.427996 ntpd[1940]: Listen normally on 12 lxcc2908d7da0a6 [fe80::fc5a:d4ff:fea0:77ad%10]:123
Dec 13 01:33:21.428036 ntpd[1940]: Listen normally on 13 lxc16aee1addafa [fe80::54a0:9ff:feee:faa8%12]:123
Dec 13 01:33:21.771273 sshd[4533]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:21.780121 systemd[1]: sshd@9-172.31.29.36:22-139.178.68.195:39650.service: Deactivated successfully.
Dec 13 01:33:21.788139 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:33:21.790520 systemd-logind[1946]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:33:21.792898 systemd-logind[1946]: Removed session 10.
Dec 13 01:33:23.154382 containerd[1957]: time="2024-12-13T01:33:23.152820582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:33:23.154382 containerd[1957]: time="2024-12-13T01:33:23.152893054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:33:23.154382 containerd[1957]: time="2024-12-13T01:33:23.152925080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:23.154382 containerd[1957]: time="2024-12-13T01:33:23.153041297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:23.201752 systemd[1]: run-containerd-runc-k8s.io-584bf8e3cd0d969cc3a0dcf72c37a5f4673f468441605a02d23681f091e31b19-runc.uz8zie.mount: Deactivated successfully.
Dec 13 01:33:23.217966 systemd[1]: Started cri-containerd-584bf8e3cd0d969cc3a0dcf72c37a5f4673f468441605a02d23681f091e31b19.scope - libcontainer container 584bf8e3cd0d969cc3a0dcf72c37a5f4673f468441605a02d23681f091e31b19.
Dec 13 01:33:23.271499 containerd[1957]: time="2024-12-13T01:33:23.271354965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:33:23.272711 containerd[1957]: time="2024-12-13T01:33:23.272372040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:33:23.272711 containerd[1957]: time="2024-12-13T01:33:23.272403469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:23.272711 containerd[1957]: time="2024-12-13T01:33:23.272537614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:33:23.321620 systemd[1]: Started cri-containerd-5f78e47f8f63b6a647cecf953c7b4504c5a23a960ee35a54fa4dbcb17a7e3c03.scope - libcontainer container 5f78e47f8f63b6a647cecf953c7b4504c5a23a960ee35a54fa4dbcb17a7e3c03.
Dec 13 01:33:23.415980 containerd[1957]: time="2024-12-13T01:33:23.415815369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2l6ks,Uid:b2c17690-4f8b-4fd9-a6e4-751163da93da,Namespace:kube-system,Attempt:0,} returns sandbox id \"584bf8e3cd0d969cc3a0dcf72c37a5f4673f468441605a02d23681f091e31b19\""
Dec 13 01:33:23.445200 containerd[1957]: time="2024-12-13T01:33:23.445139020Z" level=info msg="CreateContainer within sandbox \"584bf8e3cd0d969cc3a0dcf72c37a5f4673f468441605a02d23681f091e31b19\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:33:23.516495 containerd[1957]: time="2024-12-13T01:33:23.516227970Z" level=info msg="CreateContainer within sandbox \"584bf8e3cd0d969cc3a0dcf72c37a5f4673f468441605a02d23681f091e31b19\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d6956ee484ae4d459d0979ab0e8ade1f19cf728f957c202cce677c2ff6bf4a3\""
Dec 13 01:33:23.520475 containerd[1957]: time="2024-12-13T01:33:23.518182487Z" level=info msg="StartContainer for \"6d6956ee484ae4d459d0979ab0e8ade1f19cf728f957c202cce677c2ff6bf4a3\""
Dec 13 01:33:23.532953 containerd[1957]: time="2024-12-13T01:33:23.532912254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-px6z6,Uid:4e07e2b6-7463-46e7-b83f-d2250171da40,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f78e47f8f63b6a647cecf953c7b4504c5a23a960ee35a54fa4dbcb17a7e3c03\""
Dec 13 01:33:23.538902 containerd[1957]: time="2024-12-13T01:33:23.538858145Z" level=info msg="CreateContainer within sandbox \"5f78e47f8f63b6a647cecf953c7b4504c5a23a960ee35a54fa4dbcb17a7e3c03\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:33:23.572549 systemd[1]: Started cri-containerd-6d6956ee484ae4d459d0979ab0e8ade1f19cf728f957c202cce677c2ff6bf4a3.scope - libcontainer container 6d6956ee484ae4d459d0979ab0e8ade1f19cf728f957c202cce677c2ff6bf4a3.
Dec 13 01:33:23.612655 containerd[1957]: time="2024-12-13T01:33:23.612579207Z" level=info msg="CreateContainer within sandbox \"5f78e47f8f63b6a647cecf953c7b4504c5a23a960ee35a54fa4dbcb17a7e3c03\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3686f665d7b28760a0b960356e699698c3dc581ed104f03514ae88c9df0125db\""
Dec 13 01:33:23.614825 containerd[1957]: time="2024-12-13T01:33:23.614775692Z" level=info msg="StartContainer for \"3686f665d7b28760a0b960356e699698c3dc581ed104f03514ae88c9df0125db\""
Dec 13 01:33:23.651181 containerd[1957]: time="2024-12-13T01:33:23.651129606Z" level=info msg="StartContainer for \"6d6956ee484ae4d459d0979ab0e8ade1f19cf728f957c202cce677c2ff6bf4a3\" returns successfully"
Dec 13 01:33:23.670536 systemd[1]: Started cri-containerd-3686f665d7b28760a0b960356e699698c3dc581ed104f03514ae88c9df0125db.scope - libcontainer container 3686f665d7b28760a0b960356e699698c3dc581ed104f03514ae88c9df0125db.
Dec 13 01:33:23.709649 containerd[1957]: time="2024-12-13T01:33:23.709599701Z" level=info msg="StartContainer for \"3686f665d7b28760a0b960356e699698c3dc581ed104f03514ae88c9df0125db\" returns successfully"
Dec 13 01:33:24.291450 kubelet[3344]: I1213 01:33:24.290939 3344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2l6ks" podStartSLOduration=37.290888397 podStartE2EDuration="37.290888397s" podCreationTimestamp="2024-12-13 01:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:24.268513925 +0000 UTC m=+48.659824879" watchObservedRunningTime="2024-12-13 01:33:24.290888397 +0000 UTC m=+48.682199351"
Dec 13 01:33:24.323185 kubelet[3344]: I1213 01:33:24.323143 3344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-px6z6" podStartSLOduration=37.323089639 podStartE2EDuration="37.323089639s" podCreationTimestamp="2024-12-13 01:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:33:24.292500715 +0000 UTC m=+48.683811669" watchObservedRunningTime="2024-12-13 01:33:24.323089639 +0000 UTC m=+48.714400621"
Dec 13 01:33:26.802857 systemd[1]: Started sshd@10-172.31.29.36:22-139.178.68.195:53870.service - OpenSSH per-connection server daemon (139.178.68.195:53870).
Dec 13 01:33:27.041592 sshd[4723]: Accepted publickey for core from 139.178.68.195 port 53870 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:33:27.051729 sshd[4723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:27.062517 systemd-logind[1946]: New session 11 of user core.
Dec 13 01:33:27.070387 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:33:27.503761 sshd[4723]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:27.510184 systemd-logind[1946]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:33:27.511498 systemd[1]: sshd@10-172.31.29.36:22-139.178.68.195:53870.service: Deactivated successfully.
Dec 13 01:33:27.514738 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:33:27.518279 systemd-logind[1946]: Removed session 11.
Dec 13 01:33:32.555756 systemd[1]: Started sshd@11-172.31.29.36:22-139.178.68.195:53878.service - OpenSSH per-connection server daemon (139.178.68.195:53878).
Dec 13 01:33:32.739531 sshd[4737]: Accepted publickey for core from 139.178.68.195 port 53878 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:33:32.741807 sshd[4737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:32.749603 systemd-logind[1946]: New session 12 of user core.
Dec 13 01:33:32.754505 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:33:32.983634 sshd[4737]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:32.990181 systemd[1]: sshd@11-172.31.29.36:22-139.178.68.195:53878.service: Deactivated successfully.
Dec 13 01:33:32.995181 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:33:33.000371 systemd-logind[1946]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:33:33.003692 systemd-logind[1946]: Removed session 12.
Dec 13 01:33:38.023760 systemd[1]: Started sshd@12-172.31.29.36:22-139.178.68.195:47786.service - OpenSSH per-connection server daemon (139.178.68.195:47786).
Dec 13 01:33:38.186636 sshd[4753]: Accepted publickey for core from 139.178.68.195 port 47786 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:33:38.188267 sshd[4753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:38.194225 systemd-logind[1946]: New session 13 of user core.
Dec 13 01:33:38.197518 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:33:38.427012 sshd[4753]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:38.443170 systemd[1]: sshd@12-172.31.29.36:22-139.178.68.195:47786.service: Deactivated successfully.
Dec 13 01:33:38.453086 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:33:38.464057 systemd-logind[1946]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:33:38.465952 systemd-logind[1946]: Removed session 13.
Dec 13 01:33:43.459632 systemd[1]: Started sshd@13-172.31.29.36:22-139.178.68.195:47798.service - OpenSSH per-connection server daemon (139.178.68.195:47798).
Dec 13 01:33:43.633273 sshd[4767]: Accepted publickey for core from 139.178.68.195 port 47798 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:33:43.635256 sshd[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:43.643054 systemd-logind[1946]: New session 14 of user core.
Dec 13 01:33:43.651010 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:33:43.920554 sshd[4767]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:43.924643 systemd-logind[1946]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:33:43.925675 systemd[1]: sshd@13-172.31.29.36:22-139.178.68.195:47798.service: Deactivated successfully.
Dec 13 01:33:43.928278 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:33:43.929750 systemd-logind[1946]: Removed session 14.
Dec 13 01:33:43.954228 systemd[1]: Started sshd@14-172.31.29.36:22-139.178.68.195:47806.service - OpenSSH per-connection server daemon (139.178.68.195:47806).
Dec 13 01:33:44.112441 sshd[4781]: Accepted publickey for core from 139.178.68.195 port 47806 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:33:44.113761 sshd[4781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:44.125582 systemd-logind[1946]: New session 15 of user core.
Dec 13 01:33:44.138616 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:33:44.432981 sshd[4781]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:44.448369 systemd[1]: sshd@14-172.31.29.36:22-139.178.68.195:47806.service: Deactivated successfully.
Dec 13 01:33:44.448790 systemd-logind[1946]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:33:44.455050 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:33:44.469097 systemd-logind[1946]: Removed session 15.
Dec 13 01:33:44.471585 systemd[1]: Started sshd@15-172.31.29.36:22-139.178.68.195:47818.service - OpenSSH per-connection server daemon (139.178.68.195:47818).
Dec 13 01:33:44.672698 sshd[4791]: Accepted publickey for core from 139.178.68.195 port 47818 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:33:44.677634 sshd[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:44.699361 systemd-logind[1946]: New session 16 of user core.
Dec 13 01:33:44.713585 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:33:45.012610 sshd[4791]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:45.030905 systemd[1]: sshd@15-172.31.29.36:22-139.178.68.195:47818.service: Deactivated successfully.
Dec 13 01:33:45.036261 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:33:45.043502 systemd-logind[1946]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:33:45.051103 systemd-logind[1946]: Removed session 16.
Dec 13 01:33:50.050456 systemd[1]: Started sshd@16-172.31.29.36:22-139.178.68.195:43096.service - OpenSSH per-connection server daemon (139.178.68.195:43096).
Dec 13 01:33:50.221591 sshd[4807]: Accepted publickey for core from 139.178.68.195 port 43096 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:33:50.223651 sshd[4807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:50.229672 systemd-logind[1946]: New session 17 of user core.
Dec 13 01:33:50.239552 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:33:50.461733 sshd[4807]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:50.466657 systemd[1]: sshd@16-172.31.29.36:22-139.178.68.195:43096.service: Deactivated successfully.
Dec 13 01:33:50.469623 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:33:50.471769 systemd-logind[1946]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:33:50.474049 systemd-logind[1946]: Removed session 17.
Dec 13 01:33:55.501614 systemd[1]: Started sshd@17-172.31.29.36:22-139.178.68.195:43100.service - OpenSSH per-connection server daemon (139.178.68.195:43100).
Dec 13 01:33:55.671252 sshd[4820]: Accepted publickey for core from 139.178.68.195 port 43100 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:33:55.672991 sshd[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:55.682251 systemd-logind[1946]: New session 18 of user core.
Dec 13 01:33:55.691526 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:33:55.985703 sshd[4820]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:55.995145 systemd[1]: sshd@17-172.31.29.36:22-139.178.68.195:43100.service: Deactivated successfully.
Dec 13 01:33:55.997671 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:33:55.999809 systemd-logind[1946]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:33:56.001650 systemd-logind[1946]: Removed session 18.
Dec 13 01:33:56.016401 systemd[1]: Started sshd@18-172.31.29.36:22-139.178.68.195:43110.service - OpenSSH per-connection server daemon (139.178.68.195:43110).
Dec 13 01:33:56.206920 sshd[4833]: Accepted publickey for core from 139.178.68.195 port 43110 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:33:56.212190 sshd[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:56.232631 systemd-logind[1946]: New session 19 of user core.
Dec 13 01:33:56.258282 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:33:57.021574 sshd[4833]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:57.035080 systemd[1]: sshd@18-172.31.29.36:22-139.178.68.195:43110.service: Deactivated successfully.
Dec 13 01:33:57.044223 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:33:57.050492 systemd-logind[1946]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:33:57.070785 systemd[1]: Started sshd@19-172.31.29.36:22-139.178.68.195:46448.service - OpenSSH per-connection server daemon (139.178.68.195:46448).
Dec 13 01:33:57.081337 systemd-logind[1946]: Removed session 19.
Dec 13 01:33:57.306804 sshd[4845]: Accepted publickey for core from 139.178.68.195 port 46448 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:33:57.309722 sshd[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:57.317037 systemd-logind[1946]: New session 20 of user core.
Dec 13 01:33:57.323559 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:33:59.674462 sshd[4845]: pam_unix(sshd:session): session closed for user core
Dec 13 01:33:59.686275 systemd-logind[1946]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:33:59.686860 systemd[1]: sshd@19-172.31.29.36:22-139.178.68.195:46448.service: Deactivated successfully.
Dec 13 01:33:59.691023 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:33:59.710647 systemd-logind[1946]: Removed session 20.
Dec 13 01:33:59.718777 systemd[1]: Started sshd@20-172.31.29.36:22-139.178.68.195:46458.service - OpenSSH per-connection server daemon (139.178.68.195:46458).
Dec 13 01:33:59.890324 sshd[4865]: Accepted publickey for core from 139.178.68.195 port 46458 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:33:59.898573 sshd[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:33:59.910261 systemd-logind[1946]: New session 21 of user core.
Dec 13 01:33:59.916108 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:34:00.642799 sshd[4865]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:00.647775 systemd[1]: sshd@20-172.31.29.36:22-139.178.68.195:46458.service: Deactivated successfully.
Dec 13 01:34:00.650914 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:34:00.652252 systemd-logind[1946]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:34:00.654677 systemd-logind[1946]: Removed session 21.
Dec 13 01:34:00.674751 systemd[1]: Started sshd@21-172.31.29.36:22-139.178.68.195:46464.service - OpenSSH per-connection server daemon (139.178.68.195:46464).
Dec 13 01:34:00.842695 sshd[4876]: Accepted publickey for core from 139.178.68.195 port 46464 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:34:00.844498 sshd[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:00.855144 systemd-logind[1946]: New session 22 of user core.
Dec 13 01:34:00.865543 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:34:01.054229 sshd[4876]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:01.059685 systemd[1]: sshd@21-172.31.29.36:22-139.178.68.195:46464.service: Deactivated successfully.
Dec 13 01:34:01.062841 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:34:01.063778 systemd-logind[1946]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:34:01.065082 systemd-logind[1946]: Removed session 22.
Dec 13 01:34:06.094648 systemd[1]: Started sshd@22-172.31.29.36:22-139.178.68.195:32956.service - OpenSSH per-connection server daemon (139.178.68.195:32956).
Dec 13 01:34:06.271551 sshd[4889]: Accepted publickey for core from 139.178.68.195 port 32956 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:34:06.274227 sshd[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:06.295331 systemd-logind[1946]: New session 23 of user core.
Dec 13 01:34:06.304929 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:34:06.648095 sshd[4889]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:06.652123 systemd[1]: sshd@22-172.31.29.36:22-139.178.68.195:32956.service: Deactivated successfully.
Dec 13 01:34:06.655812 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:34:06.658053 systemd-logind[1946]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:34:06.659679 systemd-logind[1946]: Removed session 23.
Dec 13 01:34:11.706837 systemd[1]: Started sshd@23-172.31.29.36:22-139.178.68.195:32960.service - OpenSSH per-connection server daemon (139.178.68.195:32960).
Dec 13 01:34:11.903176 sshd[4905]: Accepted publickey for core from 139.178.68.195 port 32960 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:34:11.905253 sshd[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:11.914371 systemd-logind[1946]: New session 24 of user core.
Dec 13 01:34:11.921544 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 01:34:12.261704 sshd[4905]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:12.268826 systemd[1]: sshd@23-172.31.29.36:22-139.178.68.195:32960.service: Deactivated successfully.
Dec 13 01:34:12.272064 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 01:34:12.275014 systemd-logind[1946]: Session 24 logged out. Waiting for processes to exit.
Dec 13 01:34:12.277131 systemd-logind[1946]: Removed session 24.
Dec 13 01:34:17.295682 systemd[1]: Started sshd@24-172.31.29.36:22-139.178.68.195:50520.service - OpenSSH per-connection server daemon (139.178.68.195:50520).
Dec 13 01:34:17.476529 sshd[4918]: Accepted publickey for core from 139.178.68.195 port 50520 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:34:17.479359 sshd[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:17.487808 systemd-logind[1946]: New session 25 of user core.
Dec 13 01:34:17.496513 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 01:34:17.690645 sshd[4918]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:17.694681 systemd[1]: sshd@24-172.31.29.36:22-139.178.68.195:50520.service: Deactivated successfully.
Dec 13 01:34:17.697114 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:34:17.699779 systemd-logind[1946]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:34:17.701069 systemd-logind[1946]: Removed session 25.
Dec 13 01:34:22.726732 systemd[1]: Started sshd@25-172.31.29.36:22-139.178.68.195:50526.service - OpenSSH per-connection server daemon (139.178.68.195:50526).
Dec 13 01:34:22.911574 sshd[4934]: Accepted publickey for core from 139.178.68.195 port 50526 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:34:22.918580 sshd[4934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:22.941461 systemd-logind[1946]: New session 26 of user core.
Dec 13 01:34:22.964600 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 01:34:23.196191 sshd[4934]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:23.201768 systemd-logind[1946]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:34:23.203081 systemd[1]: sshd@25-172.31.29.36:22-139.178.68.195:50526.service: Deactivated successfully.
Dec 13 01:34:23.205858 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:34:23.207436 systemd-logind[1946]: Removed session 26.
Dec 13 01:34:23.234704 systemd[1]: Started sshd@26-172.31.29.36:22-139.178.68.195:50540.service - OpenSSH per-connection server daemon (139.178.68.195:50540).
Dec 13 01:34:23.406067 sshd[4947]: Accepted publickey for core from 139.178.68.195 port 50540 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:34:23.412202 sshd[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:23.425895 systemd-logind[1946]: New session 27 of user core.
Dec 13 01:34:23.430497 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 13 01:34:25.070194 containerd[1957]: time="2024-12-13T01:34:25.070136937Z" level=info msg="StopContainer for \"9787a3312142f43f0711b6e2cbc723dad1887b913b3d6225fd34291e9d9286ba\" with timeout 30 (s)"
Dec 13 01:34:25.076403 containerd[1957]: time="2024-12-13T01:34:25.076361652Z" level=info msg="Stop container \"9787a3312142f43f0711b6e2cbc723dad1887b913b3d6225fd34291e9d9286ba\" with signal terminated"
Dec 13 01:34:25.116494 systemd[1]: cri-containerd-9787a3312142f43f0711b6e2cbc723dad1887b913b3d6225fd34291e9d9286ba.scope: Deactivated successfully.
Dec 13 01:34:25.152097 containerd[1957]: time="2024-12-13T01:34:25.151542399Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:34:25.165795 containerd[1957]: time="2024-12-13T01:34:25.165645297Z" level=info msg="StopContainer for \"2c7f401fe1f748b5aa5ba93fcf458e3324b98a4f2174de0a02946c18b1947795\" with timeout 2 (s)"
Dec 13 01:34:25.166995 containerd[1957]: time="2024-12-13T01:34:25.166658805Z" level=info msg="Stop container \"2c7f401fe1f748b5aa5ba93fcf458e3324b98a4f2174de0a02946c18b1947795\" with signal terminated"
Dec 13 01:34:25.174954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9787a3312142f43f0711b6e2cbc723dad1887b913b3d6225fd34291e9d9286ba-rootfs.mount: Deactivated successfully.
Dec 13 01:34:25.190313 systemd-networkd[1809]: lxc_health: Link DOWN
Dec 13 01:34:25.190326 systemd-networkd[1809]: lxc_health: Lost carrier
Dec 13 01:34:25.219537 systemd[1]: cri-containerd-2c7f401fe1f748b5aa5ba93fcf458e3324b98a4f2174de0a02946c18b1947795.scope: Deactivated successfully.
Dec 13 01:34:25.220351 systemd[1]: cri-containerd-2c7f401fe1f748b5aa5ba93fcf458e3324b98a4f2174de0a02946c18b1947795.scope: Consumed 9.114s CPU time.
Dec 13 01:34:25.242417 containerd[1957]: time="2024-12-13T01:34:25.242311791Z" level=info msg="shim disconnected" id=9787a3312142f43f0711b6e2cbc723dad1887b913b3d6225fd34291e9d9286ba namespace=k8s.io
Dec 13 01:34:25.242417 containerd[1957]: time="2024-12-13T01:34:25.242386655Z" level=warning msg="cleaning up after shim disconnected" id=9787a3312142f43f0711b6e2cbc723dad1887b913b3d6225fd34291e9d9286ba namespace=k8s.io
Dec 13 01:34:25.242417 containerd[1957]: time="2024-12-13T01:34:25.242399006Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:34:25.286433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c7f401fe1f748b5aa5ba93fcf458e3324b98a4f2174de0a02946c18b1947795-rootfs.mount: Deactivated successfully.
Dec 13 01:34:25.301425 containerd[1957]: time="2024-12-13T01:34:25.300928848Z" level=info msg="shim disconnected" id=2c7f401fe1f748b5aa5ba93fcf458e3324b98a4f2174de0a02946c18b1947795 namespace=k8s.io
Dec 13 01:34:25.301425 containerd[1957]: time="2024-12-13T01:34:25.301054363Z" level=warning msg="cleaning up after shim disconnected" id=2c7f401fe1f748b5aa5ba93fcf458e3324b98a4f2174de0a02946c18b1947795 namespace=k8s.io
Dec 13 01:34:25.301425 containerd[1957]: time="2024-12-13T01:34:25.301099187Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:34:25.308445 containerd[1957]: time="2024-12-13T01:34:25.308063285Z" level=info msg="StopContainer for \"9787a3312142f43f0711b6e2cbc723dad1887b913b3d6225fd34291e9d9286ba\" returns successfully"
Dec 13 01:34:25.309112 containerd[1957]: time="2024-12-13T01:34:25.309061708Z" level=info msg="StopPodSandbox for \"e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77\""
Dec 13 01:34:25.309230 containerd[1957]: time="2024-12-13T01:34:25.309125127Z" level=info msg="Container to stop \"9787a3312142f43f0711b6e2cbc723dad1887b913b3d6225fd34291e9d9286ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:34:25.315440 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77-shm.mount: Deactivated successfully.
Dec 13 01:34:25.333057 systemd[1]: cri-containerd-e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77.scope: Deactivated successfully.
Dec 13 01:34:25.353251 containerd[1957]: time="2024-12-13T01:34:25.352742891Z" level=info msg="StopContainer for \"2c7f401fe1f748b5aa5ba93fcf458e3324b98a4f2174de0a02946c18b1947795\" returns successfully"
Dec 13 01:34:25.353706 containerd[1957]: time="2024-12-13T01:34:25.353663303Z" level=info msg="StopPodSandbox for \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\""
Dec 13 01:34:25.353834 containerd[1957]: time="2024-12-13T01:34:25.353723723Z" level=info msg="Container to stop \"2c7f401fe1f748b5aa5ba93fcf458e3324b98a4f2174de0a02946c18b1947795\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:34:25.353834 containerd[1957]: time="2024-12-13T01:34:25.353744139Z" level=info msg="Container to stop \"bfab08c37bea8356ba21a8da38494e96660d9f1c14b3a837c3d26f8e4d56f586\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:34:25.353834 containerd[1957]: time="2024-12-13T01:34:25.353763263Z" level=info msg="Container to stop \"2e7a611292a900db79fe22e9e00c0518694691a415345d74b2fe9f8c2f0f424f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:34:25.353834 containerd[1957]: time="2024-12-13T01:34:25.353778701Z" level=info msg="Container to stop \"b51649b98512fe8ed4572b3c4bd9650fdad29215a72efba0098be1c104362475\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:34:25.353834 containerd[1957]: time="2024-12-13T01:34:25.353793969Z" level=info msg="Container to stop \"58ff625eeeea4155ec5f1ab8408b4271eb3c2f631d28c04c8914a83877aa2455\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 01:34:25.359608 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1-shm.mount: Deactivated successfully.
Dec 13 01:34:25.374044 systemd[1]: cri-containerd-c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1.scope: Deactivated successfully.
Dec 13 01:34:25.506735 containerd[1957]: time="2024-12-13T01:34:25.506463764Z" level=info msg="shim disconnected" id=e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77 namespace=k8s.io
Dec 13 01:34:25.506735 containerd[1957]: time="2024-12-13T01:34:25.506524951Z" level=warning msg="cleaning up after shim disconnected" id=e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77 namespace=k8s.io
Dec 13 01:34:25.506735 containerd[1957]: time="2024-12-13T01:34:25.506536011Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:34:25.508821 containerd[1957]: time="2024-12-13T01:34:25.508257344Z" level=info msg="shim disconnected" id=c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1 namespace=k8s.io
Dec 13 01:34:25.508821 containerd[1957]: time="2024-12-13T01:34:25.508556528Z" level=warning msg="cleaning up after shim disconnected" id=c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1 namespace=k8s.io
Dec 13 01:34:25.508821 containerd[1957]: time="2024-12-13T01:34:25.508572785Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:34:25.555974 containerd[1957]: time="2024-12-13T01:34:25.555230942Z" level=info msg="TearDown network for sandbox \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\" successfully"
Dec 13 01:34:25.555974 containerd[1957]: time="2024-12-13T01:34:25.555316912Z" level=info msg="StopPodSandbox for \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\" returns successfully"
Dec 13 01:34:25.557447 containerd[1957]: time="2024-12-13T01:34:25.557406010Z" level=info msg="TearDown network for sandbox \"e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77\" successfully"
Dec 13 01:34:25.557785 containerd[1957]: time="2024-12-13T01:34:25.557587116Z" level=info msg="StopPodSandbox for \"e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77\" returns successfully"
Dec 13 01:34:25.618097 kubelet[3344]: I1213 01:34:25.617280 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-host-proc-sys-net\") pod \"e3103c01-c084-486e-82d5-eb4738245941\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") "
Dec 13 01:34:25.618097 kubelet[3344]: I1213 01:34:25.617661 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jckm\" (UniqueName: \"kubernetes.io/projected/e3103c01-c084-486e-82d5-eb4738245941-kube-api-access-6jckm\") pod \"e3103c01-c084-486e-82d5-eb4738245941\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") "
Dec 13 01:34:25.618097 kubelet[3344]: I1213 01:34:25.617698 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-cni-path\") pod \"e3103c01-c084-486e-82d5-eb4738245941\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") "
Dec 13 01:34:25.618097 kubelet[3344]: I1213 01:34:25.617730 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3103c01-c084-486e-82d5-eb4738245941-hubble-tls\") pod \"e3103c01-c084-486e-82d5-eb4738245941\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") "
Dec 13 01:34:25.618097 kubelet[3344]: I1213 01:34:25.617758 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-cilium-run\") pod \"e3103c01-c084-486e-82d5-eb4738245941\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") "
Dec 13 01:34:25.618097 kubelet[3344]: I1213 01:34:25.617783 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-hostproc\") pod \"e3103c01-c084-486e-82d5-eb4738245941\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") "
Dec 13 01:34:25.622378 kubelet[3344]: I1213 01:34:25.617811 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-host-proc-sys-kernel\") pod \"e3103c01-c084-486e-82d5-eb4738245941\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") "
Dec 13 01:34:25.622378 kubelet[3344]: I1213 01:34:25.617947 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-etc-cni-netd\") pod \"e3103c01-c084-486e-82d5-eb4738245941\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") "
Dec 13 01:34:25.622378 kubelet[3344]: I1213 01:34:25.617983 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3103c01-c084-486e-82d5-eb4738245941-clustermesh-secrets\") pod \"e3103c01-c084-486e-82d5-eb4738245941\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") "
Dec 13 01:34:25.629627 kubelet[3344]: I1213 01:34:25.617360 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e3103c01-c084-486e-82d5-eb4738245941" (UID: "e3103c01-c084-486e-82d5-eb4738245941"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:34:25.631249 kubelet[3344]: I1213 01:34:25.629867 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e3103c01-c084-486e-82d5-eb4738245941" (UID: "e3103c01-c084-486e-82d5-eb4738245941"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:34:25.631514 kubelet[3344]: I1213 01:34:25.631143 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-cni-path" (OuterVolumeSpecName: "cni-path") pod "e3103c01-c084-486e-82d5-eb4738245941" (UID: "e3103c01-c084-486e-82d5-eb4738245941"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:34:25.658590 kubelet[3344]: I1213 01:34:25.658159 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-hostproc" (OuterVolumeSpecName: "hostproc") pod "e3103c01-c084-486e-82d5-eb4738245941" (UID: "e3103c01-c084-486e-82d5-eb4738245941"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:34:25.658590 kubelet[3344]: I1213 01:34:25.658239 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e3103c01-c084-486e-82d5-eb4738245941" (UID: "e3103c01-c084-486e-82d5-eb4738245941"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:34:25.658590 kubelet[3344]: I1213 01:34:25.658265 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e3103c01-c084-486e-82d5-eb4738245941" (UID: "e3103c01-c084-486e-82d5-eb4738245941"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:34:25.658590 kubelet[3344]: I1213 01:34:25.658339 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3103c01-c084-486e-82d5-eb4738245941-cilium-config-path\") pod \"e3103c01-c084-486e-82d5-eb4738245941\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") "
Dec 13 01:34:25.658590 kubelet[3344]: I1213 01:34:25.658377 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-cilium-cgroup\") pod \"e3103c01-c084-486e-82d5-eb4738245941\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") "
Dec 13 01:34:25.658933 kubelet[3344]: I1213 01:34:25.658407 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-xtables-lock\") pod \"e3103c01-c084-486e-82d5-eb4738245941\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") "
Dec 13 01:34:25.658933 kubelet[3344]: I1213 01:34:25.658434 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-bpf-maps\") pod \"e3103c01-c084-486e-82d5-eb4738245941\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") "
Dec 13 01:34:25.658933 kubelet[3344]: I1213 01:34:25.658465 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-lib-modules\") pod \"e3103c01-c084-486e-82d5-eb4738245941\" (UID: \"e3103c01-c084-486e-82d5-eb4738245941\") "
Dec 13 01:34:25.672325 kubelet[3344]: I1213 01:34:25.669794 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3103c01-c084-486e-82d5-eb4738245941-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e3103c01-c084-486e-82d5-eb4738245941" (UID: "e3103c01-c084-486e-82d5-eb4738245941"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:34:25.672325 kubelet[3344]: I1213 01:34:25.669863 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e3103c01-c084-486e-82d5-eb4738245941" (UID: "e3103c01-c084-486e-82d5-eb4738245941"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:34:25.678638 kubelet[3344]: I1213 01:34:25.678581 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e3103c01-c084-486e-82d5-eb4738245941" (UID: "e3103c01-c084-486e-82d5-eb4738245941"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:34:25.678855 kubelet[3344]: I1213 01:34:25.678730 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e3103c01-c084-486e-82d5-eb4738245941" (UID: "e3103c01-c084-486e-82d5-eb4738245941"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:34:25.681314 kubelet[3344]: I1213 01:34:25.679087 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e3103c01-c084-486e-82d5-eb4738245941" (UID: "e3103c01-c084-486e-82d5-eb4738245941"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 01:34:25.681314 kubelet[3344]: I1213 01:34:25.679178 3344 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-cni-path\") on node \"ip-172-31-29-36\" DevicePath \"\""
Dec 13 01:34:25.681314 kubelet[3344]: I1213 01:34:25.680623 3344 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-cilium-run\") on node \"ip-172-31-29-36\" DevicePath \"\""
Dec 13 01:34:25.681314 kubelet[3344]: I1213 01:34:25.680660 3344 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-hostproc\") on node \"ip-172-31-29-36\" DevicePath \"\""
Dec 13 01:34:25.681314 kubelet[3344]: I1213 01:34:25.680679 3344 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-etc-cni-netd\") on node \"ip-172-31-29-36\" DevicePath \"\""
Dec 13 01:34:25.682045 kubelet[3344]: I1213 01:34:25.682015 3344 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-host-proc-sys-kernel\") on node \"ip-172-31-29-36\" DevicePath \"\""
Dec 13 01:34:25.682045 kubelet[3344]: I1213 01:34:25.682051 3344 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-host-proc-sys-net\") on node \"ip-172-31-29-36\" DevicePath \"\""
Dec 13 01:34:25.692308 kubelet[3344]: I1213 01:34:25.692236 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3103c01-c084-486e-82d5-eb4738245941-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e3103c01-c084-486e-82d5-eb4738245941" (UID: "e3103c01-c084-486e-82d5-eb4738245941"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:34:25.695885 kubelet[3344]: I1213 01:34:25.695838 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3103c01-c084-486e-82d5-eb4738245941-kube-api-access-6jckm" (OuterVolumeSpecName: "kube-api-access-6jckm") pod "e3103c01-c084-486e-82d5-eb4738245941" (UID: "e3103c01-c084-486e-82d5-eb4738245941"). InnerVolumeSpecName "kube-api-access-6jckm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:34:25.696195 kubelet[3344]: I1213 01:34:25.695938 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3103c01-c084-486e-82d5-eb4738245941-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e3103c01-c084-486e-82d5-eb4738245941" (UID: "e3103c01-c084-486e-82d5-eb4738245941"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 01:34:25.783964 kubelet[3344]: I1213 01:34:25.782469 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5719724-ea25-44fb-b01a-887e072b33c9-cilium-config-path\") pod \"a5719724-ea25-44fb-b01a-887e072b33c9\" (UID: \"a5719724-ea25-44fb-b01a-887e072b33c9\") "
Dec 13 01:34:25.783964 kubelet[3344]: I1213 01:34:25.783399 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65hx7\" (UniqueName: \"kubernetes.io/projected/a5719724-ea25-44fb-b01a-887e072b33c9-kube-api-access-65hx7\") pod \"a5719724-ea25-44fb-b01a-887e072b33c9\" (UID: \"a5719724-ea25-44fb-b01a-887e072b33c9\") "
Dec 13 01:34:25.783964 kubelet[3344]: I1213 01:34:25.783474 3344 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3103c01-c084-486e-82d5-eb4738245941-hubble-tls\") on node \"ip-172-31-29-36\" DevicePath \"\""
Dec 13 01:34:25.783964 kubelet[3344]: I1213 01:34:25.783847 3344 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3103c01-c084-486e-82d5-eb4738245941-clustermesh-secrets\") on node \"ip-172-31-29-36\" DevicePath \"\""
Dec 13 01:34:25.783964 kubelet[3344]: I1213 01:34:25.783871 3344 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3103c01-c084-486e-82d5-eb4738245941-cilium-config-path\") on node \"ip-172-31-29-36\" DevicePath \"\""
Dec 13 01:34:25.783964 kubelet[3344]: I1213 01:34:25.783888 3344 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-cilium-cgroup\") on node \"ip-172-31-29-36\" DevicePath \"\""
Dec 13 01:34:25.783964 kubelet[3344]: I1213 01:34:25.783901 3344 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-xtables-lock\") on node \"ip-172-31-29-36\" DevicePath \"\""
Dec 13 01:34:25.784620 kubelet[3344]: I1213 01:34:25.783914 3344 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-bpf-maps\") on node \"ip-172-31-29-36\" DevicePath \"\""
Dec 13 01:34:25.784620 kubelet[3344]: I1213 01:34:25.783929 3344 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3103c01-c084-486e-82d5-eb4738245941-lib-modules\") on node \"ip-172-31-29-36\" DevicePath \"\""
Dec 13 01:34:25.784620 kubelet[3344]: I1213 01:34:25.783944 3344 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6jckm\" (UniqueName: \"kubernetes.io/projected/e3103c01-c084-486e-82d5-eb4738245941-kube-api-access-6jckm\") on node \"ip-172-31-29-36\" DevicePath \"\""
Dec 13 01:34:25.788185 kubelet[3344]: I1213 01:34:25.788136 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5719724-ea25-44fb-b01a-887e072b33c9-kube-api-access-65hx7" (OuterVolumeSpecName: "kube-api-access-65hx7") pod "a5719724-ea25-44fb-b01a-887e072b33c9" (UID: "a5719724-ea25-44fb-b01a-887e072b33c9"). InnerVolumeSpecName "kube-api-access-65hx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 01:34:25.789145 kubelet[3344]: I1213 01:34:25.789114 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5719724-ea25-44fb-b01a-887e072b33c9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a5719724-ea25-44fb-b01a-887e072b33c9" (UID: "a5719724-ea25-44fb-b01a-887e072b33c9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 01:34:25.894522 kubelet[3344]: I1213 01:34:25.886529 3344 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-65hx7\" (UniqueName: \"kubernetes.io/projected/a5719724-ea25-44fb-b01a-887e072b33c9-kube-api-access-65hx7\") on node \"ip-172-31-29-36\" DevicePath \"\""
Dec 13 01:34:25.894522 kubelet[3344]: I1213 01:34:25.886586 3344 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5719724-ea25-44fb-b01a-887e072b33c9-cilium-config-path\") on node \"ip-172-31-29-36\" DevicePath \"\""
Dec 13 01:34:25.902949 systemd[1]: Removed slice kubepods-besteffort-poda5719724_ea25_44fb_b01a_887e072b33c9.slice - libcontainer container kubepods-besteffort-poda5719724_ea25_44fb_b01a_887e072b33c9.slice.
Dec 13 01:34:25.911922 systemd[1]: Removed slice kubepods-burstable-pode3103c01_c084_486e_82d5_eb4738245941.slice - libcontainer container kubepods-burstable-pode3103c01_c084_486e_82d5_eb4738245941.slice.
Dec 13 01:34:25.912311 systemd[1]: kubepods-burstable-pode3103c01_c084_486e_82d5_eb4738245941.slice: Consumed 9.211s CPU time.
Dec 13 01:34:26.115684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1-rootfs.mount: Deactivated successfully.
Dec 13 01:34:26.116760 systemd[1]: var-lib-kubelet-pods-e3103c01\x2dc084\x2d486e\x2d82d5\x2deb4738245941-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:34:26.116883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77-rootfs.mount: Deactivated successfully.
Dec 13 01:34:26.117038 systemd[1]: var-lib-kubelet-pods-a5719724\x2dea25\x2d44fb\x2db01a\x2d887e072b33c9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d65hx7.mount: Deactivated successfully.
Dec 13 01:34:26.117129 systemd[1]: var-lib-kubelet-pods-e3103c01\x2dc084\x2d486e\x2d82d5\x2deb4738245941-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6jckm.mount: Deactivated successfully.
Dec 13 01:34:26.117212 systemd[1]: var-lib-kubelet-pods-e3103c01\x2dc084\x2d486e\x2d82d5\x2deb4738245941-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 01:34:26.179995 kubelet[3344]: E1213 01:34:26.179868 3344 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:34:26.510417 kubelet[3344]: I1213 01:34:26.509096 3344 scope.go:117] "RemoveContainer" containerID="9787a3312142f43f0711b6e2cbc723dad1887b913b3d6225fd34291e9d9286ba"
Dec 13 01:34:26.515037 containerd[1957]: time="2024-12-13T01:34:26.514252611Z" level=info msg="RemoveContainer for \"9787a3312142f43f0711b6e2cbc723dad1887b913b3d6225fd34291e9d9286ba\""
Dec 13 01:34:26.565351 containerd[1957]: time="2024-12-13T01:34:26.564949096Z" level=info msg="RemoveContainer for \"9787a3312142f43f0711b6e2cbc723dad1887b913b3d6225fd34291e9d9286ba\" returns successfully"
Dec 13 01:34:26.566508 kubelet[3344]: I1213 01:34:26.566416 3344 scope.go:117] "RemoveContainer" containerID="2c7f401fe1f748b5aa5ba93fcf458e3324b98a4f2174de0a02946c18b1947795"
Dec 13 01:34:26.572884 containerd[1957]: time="2024-12-13T01:34:26.571838370Z" level=info msg="RemoveContainer for \"2c7f401fe1f748b5aa5ba93fcf458e3324b98a4f2174de0a02946c18b1947795\""
Dec 13 01:34:26.583698 containerd[1957]: time="2024-12-13T01:34:26.583651424Z" level=info msg="RemoveContainer for \"2c7f401fe1f748b5aa5ba93fcf458e3324b98a4f2174de0a02946c18b1947795\" returns successfully"
Dec 13 01:34:26.585805 kubelet[3344]: I1213 01:34:26.585772 3344 scope.go:117] "RemoveContainer" containerID="2e7a611292a900db79fe22e9e00c0518694691a415345d74b2fe9f8c2f0f424f"
Dec 13 01:34:26.589016 containerd[1957]: time="2024-12-13T01:34:26.588444569Z" level=info msg="RemoveContainer for \"2e7a611292a900db79fe22e9e00c0518694691a415345d74b2fe9f8c2f0f424f\""
Dec 13 01:34:26.598782 containerd[1957]: time="2024-12-13T01:34:26.598733024Z" level=info msg="RemoveContainer for \"2e7a611292a900db79fe22e9e00c0518694691a415345d74b2fe9f8c2f0f424f\" returns successfully"
Dec 13 01:34:26.599781 kubelet[3344]: I1213 01:34:26.599438 3344 scope.go:117] "RemoveContainer" containerID="bfab08c37bea8356ba21a8da38494e96660d9f1c14b3a837c3d26f8e4d56f586"
Dec 13 01:34:26.603465 containerd[1957]: time="2024-12-13T01:34:26.602282609Z" level=info msg="RemoveContainer for \"bfab08c37bea8356ba21a8da38494e96660d9f1c14b3a837c3d26f8e4d56f586\""
Dec 13 01:34:26.610227 containerd[1957]: time="2024-12-13T01:34:26.610112313Z" level=info msg="RemoveContainer for \"bfab08c37bea8356ba21a8da38494e96660d9f1c14b3a837c3d26f8e4d56f586\" returns successfully"
Dec 13 01:34:26.611336 kubelet[3344]: I1213 01:34:26.610858 3344 scope.go:117] "RemoveContainer" containerID="b51649b98512fe8ed4572b3c4bd9650fdad29215a72efba0098be1c104362475"
Dec 13 01:34:26.613427 containerd[1957]: time="2024-12-13T01:34:26.613392675Z" level=info msg="RemoveContainer for \"b51649b98512fe8ed4572b3c4bd9650fdad29215a72efba0098be1c104362475\""
Dec 13 01:34:26.621509 containerd[1957]: time="2024-12-13T01:34:26.621262688Z" level=info msg="RemoveContainer for \"b51649b98512fe8ed4572b3c4bd9650fdad29215a72efba0098be1c104362475\" returns successfully"
Dec 13 01:34:26.622345 kubelet[3344]: I1213 01:34:26.621939 3344 scope.go:117] "RemoveContainer" containerID="58ff625eeeea4155ec5f1ab8408b4271eb3c2f631d28c04c8914a83877aa2455"
Dec 13 01:34:26.631333 containerd[1957]: time="2024-12-13T01:34:26.626695580Z" level=info msg="RemoveContainer for \"58ff625eeeea4155ec5f1ab8408b4271eb3c2f631d28c04c8914a83877aa2455\""
Dec 13 01:34:26.647012 containerd[1957]: time="2024-12-13T01:34:26.646969539Z" level=info msg="RemoveContainer for \"58ff625eeeea4155ec5f1ab8408b4271eb3c2f631d28c04c8914a83877aa2455\" returns successfully"
Dec 13 01:34:26.925955 sshd[4947]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:26.932537 systemd[1]: sshd@26-172.31.29.36:22-139.178.68.195:50540.service: Deactivated successfully.
Dec 13 01:34:26.935552 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 01:34:26.937527 systemd-logind[1946]: Session 27 logged out. Waiting for processes to exit.
Dec 13 01:34:26.939309 systemd-logind[1946]: Removed session 27.
Dec 13 01:34:26.965213 systemd[1]: Started sshd@27-172.31.29.36:22-139.178.68.195:52160.service - OpenSSH per-connection server daemon (139.178.68.195:52160).
Dec 13 01:34:27.127363 sshd[5111]: Accepted publickey for core from 139.178.68.195 port 52160 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:34:27.129268 sshd[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:27.134359 systemd-logind[1946]: New session 28 of user core.
Dec 13 01:34:27.139505 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 01:34:27.427072 ntpd[1940]: Deleting interface #11 lxc_health, fe80::544a:11ff:fef9:d9a6%8#123, interface stats: received=0, sent=0, dropped=0, active_time=66 secs
Dec 13 01:34:27.427583 ntpd[1940]: 13 Dec 01:34:27 ntpd[1940]: Deleting interface #11 lxc_health, fe80::544a:11ff:fef9:d9a6%8#123, interface stats: received=0, sent=0, dropped=0, active_time=66 secs
Dec 13 01:34:27.859326 kubelet[3344]: E1213 01:34:27.857808 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-px6z6" podUID="4e07e2b6-7463-46e7-b83f-d2250171da40"
Dec 13 01:34:27.863006 kubelet[3344]: I1213 01:34:27.862275 3344 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a5719724-ea25-44fb-b01a-887e072b33c9" path="/var/lib/kubelet/pods/a5719724-ea25-44fb-b01a-887e072b33c9/volumes"
Dec 13 01:34:27.906363 sshd[5111]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:27.909934 kubelet[3344]: I1213 01:34:27.909898 3344 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e3103c01-c084-486e-82d5-eb4738245941" path="/var/lib/kubelet/pods/e3103c01-c084-486e-82d5-eb4738245941/volumes"
Dec 13 01:34:27.924989 systemd[1]: sshd@27-172.31.29.36:22-139.178.68.195:52160.service: Deactivated successfully.
Dec 13 01:34:27.939801 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 01:34:27.962677 kubelet[3344]: I1213 01:34:27.962619 3344 topology_manager.go:215] "Topology Admit Handler" podUID="8d8d8843-6200-44d3-887d-25d93eb16375" podNamespace="kube-system" podName="cilium-7tcr5"
Dec 13 01:34:27.979260 systemd-logind[1946]: Session 28 logged out. Waiting for processes to exit.
Dec 13 01:34:27.986812 systemd[1]: Started sshd@28-172.31.29.36:22-139.178.68.195:52172.service - OpenSSH per-connection server daemon (139.178.68.195:52172).
Dec 13 01:34:27.991542 systemd-logind[1946]: Removed session 28.
Dec 13 01:34:28.006477 kubelet[3344]: E1213 01:34:28.006437 3344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5719724-ea25-44fb-b01a-887e072b33c9" containerName="cilium-operator"
Dec 13 01:34:28.006675 kubelet[3344]: E1213 01:34:28.006494 3344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e3103c01-c084-486e-82d5-eb4738245941" containerName="mount-bpf-fs"
Dec 13 01:34:28.006675 kubelet[3344]: E1213 01:34:28.006506 3344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e3103c01-c084-486e-82d5-eb4738245941" containerName="clean-cilium-state"
Dec 13 01:34:28.006675 kubelet[3344]: E1213 01:34:28.006516 3344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e3103c01-c084-486e-82d5-eb4738245941" containerName="mount-cgroup"
Dec 13 01:34:28.006675 kubelet[3344]: E1213 01:34:28.006525 3344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e3103c01-c084-486e-82d5-eb4738245941" containerName="apply-sysctl-overwrites"
Dec 13 01:34:28.006675 kubelet[3344]: E1213 01:34:28.006536 3344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e3103c01-c084-486e-82d5-eb4738245941" containerName="cilium-agent"
Dec 13 01:34:28.010101 kubelet[3344]: I1213 01:34:28.010059 3344 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5719724-ea25-44fb-b01a-887e072b33c9" containerName="cilium-operator"
Dec 13 01:34:28.010222 kubelet[3344]: I1213 01:34:28.010108 3344 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3103c01-c084-486e-82d5-eb4738245941" containerName="cilium-agent"
Dec 13 01:34:28.041203 systemd[1]: Created slice kubepods-burstable-pod8d8d8843_6200_44d3_887d_25d93eb16375.slice - libcontainer container kubepods-burstable-pod8d8d8843_6200_44d3_887d_25d93eb16375.slice.
Dec 13 01:34:28.201503 sshd[5125]: Accepted publickey for core from 139.178.68.195 port 52172 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:34:28.203916 sshd[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:28.207941 kubelet[3344]: I1213 01:34:28.207045 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8d8d8843-6200-44d3-887d-25d93eb16375-cilium-ipsec-secrets\") pod \"cilium-7tcr5\" (UID: \"8d8d8843-6200-44d3-887d-25d93eb16375\") " pod="kube-system/cilium-7tcr5"
Dec 13 01:34:28.207941 kubelet[3344]: I1213 01:34:28.207110 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8d8d8843-6200-44d3-887d-25d93eb16375-host-proc-sys-kernel\") pod \"cilium-7tcr5\" (UID: \"8d8d8843-6200-44d3-887d-25d93eb16375\") " pod="kube-system/cilium-7tcr5"
Dec 13 01:34:28.207941 kubelet[3344]: I1213 01:34:28.207143 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d8d8843-6200-44d3-887d-25d93eb16375-lib-modules\") pod \"cilium-7tcr5\" (UID: \"8d8d8843-6200-44d3-887d-25d93eb16375\") " pod="kube-system/cilium-7tcr5"
Dec 13 01:34:28.207941 kubelet[3344]: I1213 01:34:28.207170 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d8d8843-6200-44d3-887d-25d93eb16375-xtables-lock\") pod \"cilium-7tcr5\" (UID: \"8d8d8843-6200-44d3-887d-25d93eb16375\") " pod="kube-system/cilium-7tcr5"
Dec 13 01:34:28.207941 kubelet[3344]: I1213 01:34:28.207202 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4ww4\" (UniqueName: \"kubernetes.io/projected/8d8d8843-6200-44d3-887d-25d93eb16375-kube-api-access-p4ww4\") pod \"cilium-7tcr5\" (UID: \"8d8d8843-6200-44d3-887d-25d93eb16375\") " pod="kube-system/cilium-7tcr5"
Dec 13 01:34:28.208244 kubelet[3344]: I1213 01:34:28.207232 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d8d8843-6200-44d3-887d-25d93eb16375-cilium-config-path\") pod \"cilium-7tcr5\" (UID: \"8d8d8843-6200-44d3-887d-25d93eb16375\") " pod="kube-system/cilium-7tcr5"
Dec 13 01:34:28.208244 kubelet[3344]: I1213 01:34:28.207264 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8d8d8843-6200-44d3-887d-25d93eb16375-host-proc-sys-net\") pod \"cilium-7tcr5\" (UID: \"8d8d8843-6200-44d3-887d-25d93eb16375\") " pod="kube-system/cilium-7tcr5"
Dec 13 01:34:28.208244 kubelet[3344]: I1213 01:34:28.207316 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8d8d8843-6200-44d3-887d-25d93eb16375-cilium-cgroup\") pod \"cilium-7tcr5\" (UID: \"8d8d8843-6200-44d3-887d-25d93eb16375\") " pod="kube-system/cilium-7tcr5"
Dec 13 01:34:28.208244 kubelet[3344]: I1213 01:34:28.207345 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8d8d8843-6200-44d3-887d-25d93eb16375-cni-path\") pod \"cilium-7tcr5\" (UID: \"8d8d8843-6200-44d3-887d-25d93eb16375\") " pod="kube-system/cilium-7tcr5"
Dec 13 01:34:28.208244 kubelet[3344]: I1213 01:34:28.207378 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8d8d8843-6200-44d3-887d-25d93eb16375-cilium-run\") pod \"cilium-7tcr5\" (UID: \"8d8d8843-6200-44d3-887d-25d93eb16375\") " pod="kube-system/cilium-7tcr5"
Dec 13 01:34:28.208244 kubelet[3344]: I1213 01:34:28.207410 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8d8d8843-6200-44d3-887d-25d93eb16375-bpf-maps\") pod \"cilium-7tcr5\" (UID: \"8d8d8843-6200-44d3-887d-25d93eb16375\") " pod="kube-system/cilium-7tcr5"
Dec 13 01:34:28.208519 kubelet[3344]: I1213 01:34:28.207439 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d8d8843-6200-44d3-887d-25d93eb16375-etc-cni-netd\") pod \"cilium-7tcr5\" (UID: \"8d8d8843-6200-44d3-887d-25d93eb16375\") " pod="kube-system/cilium-7tcr5"
Dec 13 01:34:28.208519 kubelet[3344]: I1213 01:34:28.207470 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8d8d8843-6200-44d3-887d-25d93eb16375-hubble-tls\") pod \"cilium-7tcr5\" (UID: \"8d8d8843-6200-44d3-887d-25d93eb16375\") " pod="kube-system/cilium-7tcr5"
Dec 13 01:34:28.208519 kubelet[3344]: I1213 01:34:28.207503 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8d8d8843-6200-44d3-887d-25d93eb16375-clustermesh-secrets\") pod \"cilium-7tcr5\" (UID: \"8d8d8843-6200-44d3-887d-25d93eb16375\") " pod="kube-system/cilium-7tcr5"
Dec 13 01:34:28.208519 kubelet[3344]: I1213 01:34:28.207534 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8d8d8843-6200-44d3-887d-25d93eb16375-hostproc\") pod \"cilium-7tcr5\" (UID: \"8d8d8843-6200-44d3-887d-25d93eb16375\") " pod="kube-system/cilium-7tcr5"
Dec 13 01:34:28.218149 systemd-logind[1946]: New session 29 of user core.
Dec 13 01:34:28.224664 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 13 01:34:28.383427 sshd[5125]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:28.389862 systemd[1]: sshd@28-172.31.29.36:22-139.178.68.195:52172.service: Deactivated successfully.
Dec 13 01:34:28.392887 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 01:34:28.394930 systemd-logind[1946]: Session 29 logged out. Waiting for processes to exit.
Dec 13 01:34:28.397091 systemd-logind[1946]: Removed session 29.
Dec 13 01:34:28.419709 systemd[1]: Started sshd@29-172.31.29.36:22-139.178.68.195:52186.service - OpenSSH per-connection server daemon (139.178.68.195:52186).
Dec 13 01:34:28.589317 sshd[5137]: Accepted publickey for core from 139.178.68.195 port 52186 ssh2: RSA SHA256:jemIVC9coYQS9L4PsiWm2Ug3GTTFAGg9T5Q5jNKvYxg
Dec 13 01:34:28.592421 kubelet[3344]: I1213 01:34:28.592397 3344 setters.go:568] "Node became not ready" node="ip-172-31-29-36" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:34:28Z","lastTransitionTime":"2024-12-13T01:34:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 01:34:28.594092 sshd[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:34:28.609900 systemd-logind[1946]: New session 30 of user core.
Dec 13 01:34:28.614604 systemd[1]: Started session-30.scope - Session 30 of User core.
Dec 13 01:34:28.651768 containerd[1957]: time="2024-12-13T01:34:28.651722325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7tcr5,Uid:8d8d8843-6200-44d3-887d-25d93eb16375,Namespace:kube-system,Attempt:0,}"
Dec 13 01:34:28.756764 containerd[1957]: time="2024-12-13T01:34:28.756649377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:34:28.757112 containerd[1957]: time="2024-12-13T01:34:28.756951002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:34:28.757928 containerd[1957]: time="2024-12-13T01:34:28.757661899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:34:28.757928 containerd[1957]: time="2024-12-13T01:34:28.757843725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:34:28.786182 systemd[1]: Started cri-containerd-7e79d3dc911c753a3c21cd935b58e24cb832d01ff9ae39d310b79c3f64acbef9.scope - libcontainer container 7e79d3dc911c753a3c21cd935b58e24cb832d01ff9ae39d310b79c3f64acbef9.
Dec 13 01:34:28.845972 containerd[1957]: time="2024-12-13T01:34:28.845855473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7tcr5,Uid:8d8d8843-6200-44d3-887d-25d93eb16375,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e79d3dc911c753a3c21cd935b58e24cb832d01ff9ae39d310b79c3f64acbef9\""
Dec 13 01:34:28.852156 containerd[1957]: time="2024-12-13T01:34:28.852033452Z" level=info msg="CreateContainer within sandbox \"7e79d3dc911c753a3c21cd935b58e24cb832d01ff9ae39d310b79c3f64acbef9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:34:28.916336 containerd[1957]: time="2024-12-13T01:34:28.916239091Z" level=info msg="CreateContainer within sandbox \"7e79d3dc911c753a3c21cd935b58e24cb832d01ff9ae39d310b79c3f64acbef9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3620134b02b0e5460f67f7491589b881027b166f22f10dfecb23a2ae73aa639b\""
Dec 13 01:34:28.918531 containerd[1957]: time="2024-12-13T01:34:28.917051390Z" level=info msg="StartContainer for \"3620134b02b0e5460f67f7491589b881027b166f22f10dfecb23a2ae73aa639b\""
Dec 13 01:34:28.953562 systemd[1]: Started cri-containerd-3620134b02b0e5460f67f7491589b881027b166f22f10dfecb23a2ae73aa639b.scope - libcontainer container 3620134b02b0e5460f67f7491589b881027b166f22f10dfecb23a2ae73aa639b.
Dec 13 01:34:28.988207 containerd[1957]: time="2024-12-13T01:34:28.988058251Z" level=info msg="StartContainer for \"3620134b02b0e5460f67f7491589b881027b166f22f10dfecb23a2ae73aa639b\" returns successfully"
Dec 13 01:34:29.008618 systemd[1]: cri-containerd-3620134b02b0e5460f67f7491589b881027b166f22f10dfecb23a2ae73aa639b.scope: Deactivated successfully.
Dec 13 01:34:29.070641 containerd[1957]: time="2024-12-13T01:34:29.070555100Z" level=info msg="shim disconnected" id=3620134b02b0e5460f67f7491589b881027b166f22f10dfecb23a2ae73aa639b namespace=k8s.io
Dec 13 01:34:29.070641 containerd[1957]: time="2024-12-13T01:34:29.070610491Z" level=warning msg="cleaning up after shim disconnected" id=3620134b02b0e5460f67f7491589b881027b166f22f10dfecb23a2ae73aa639b namespace=k8s.io
Dec 13 01:34:29.070641 containerd[1957]: time="2024-12-13T01:34:29.070623799Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:34:29.532449 containerd[1957]: time="2024-12-13T01:34:29.532260510Z" level=info msg="CreateContainer within sandbox \"7e79d3dc911c753a3c21cd935b58e24cb832d01ff9ae39d310b79c3f64acbef9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:34:29.559548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2937085423.mount: Deactivated successfully.
Dec 13 01:34:29.562408 containerd[1957]: time="2024-12-13T01:34:29.562361114Z" level=info msg="CreateContainer within sandbox \"7e79d3dc911c753a3c21cd935b58e24cb832d01ff9ae39d310b79c3f64acbef9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e11d7addd930721345dac54f8564062363657e74e17d82582acc70486834711f\""
Dec 13 01:34:29.563014 containerd[1957]: time="2024-12-13T01:34:29.562987853Z" level=info msg="StartContainer for \"e11d7addd930721345dac54f8564062363657e74e17d82582acc70486834711f\""
Dec 13 01:34:29.603508 systemd[1]: Started cri-containerd-e11d7addd930721345dac54f8564062363657e74e17d82582acc70486834711f.scope - libcontainer container e11d7addd930721345dac54f8564062363657e74e17d82582acc70486834711f.
Dec 13 01:34:29.645327 containerd[1957]: time="2024-12-13T01:34:29.644855702Z" level=info msg="StartContainer for \"e11d7addd930721345dac54f8564062363657e74e17d82582acc70486834711f\" returns successfully"
Dec 13 01:34:29.653181 systemd[1]: cri-containerd-e11d7addd930721345dac54f8564062363657e74e17d82582acc70486834711f.scope: Deactivated successfully.
Dec 13 01:34:29.747770 containerd[1957]: time="2024-12-13T01:34:29.747448838Z" level=info msg="shim disconnected" id=e11d7addd930721345dac54f8564062363657e74e17d82582acc70486834711f namespace=k8s.io
Dec 13 01:34:29.748725 containerd[1957]: time="2024-12-13T01:34:29.747775565Z" level=warning msg="cleaning up after shim disconnected" id=e11d7addd930721345dac54f8564062363657e74e17d82582acc70486834711f namespace=k8s.io
Dec 13 01:34:29.748725 containerd[1957]: time="2024-12-13T01:34:29.747792355Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:34:29.858405 kubelet[3344]: E1213 01:34:29.857339 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-px6z6" podUID="4e07e2b6-7463-46e7-b83f-d2250171da40"
Dec 13 01:34:29.861223 kubelet[3344]: E1213 01:34:29.859968 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-2l6ks" podUID="b2c17690-4f8b-4fd9-a6e4-751163da93da"
Dec 13 01:34:30.320749 systemd[1]: run-containerd-runc-k8s.io-e11d7addd930721345dac54f8564062363657e74e17d82582acc70486834711f-runc.P06Iee.mount: Deactivated successfully.
Dec 13 01:34:30.321069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e11d7addd930721345dac54f8564062363657e74e17d82582acc70486834711f-rootfs.mount: Deactivated successfully.
Dec 13 01:34:30.539174 containerd[1957]: time="2024-12-13T01:34:30.538638130Z" level=info msg="CreateContainer within sandbox \"7e79d3dc911c753a3c21cd935b58e24cb832d01ff9ae39d310b79c3f64acbef9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:34:30.612857 containerd[1957]: time="2024-12-13T01:34:30.612353782Z" level=info msg="CreateContainer within sandbox \"7e79d3dc911c753a3c21cd935b58e24cb832d01ff9ae39d310b79c3f64acbef9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3a50817b56954980d729262319fd7cfe8a1d0072d2847293e5b11a89c2a27e39\""
Dec 13 01:34:30.615892 containerd[1957]: time="2024-12-13T01:34:30.613356903Z" level=info msg="StartContainer for \"3a50817b56954980d729262319fd7cfe8a1d0072d2847293e5b11a89c2a27e39\""
Dec 13 01:34:30.663721 systemd[1]: Started cri-containerd-3a50817b56954980d729262319fd7cfe8a1d0072d2847293e5b11a89c2a27e39.scope - libcontainer container 3a50817b56954980d729262319fd7cfe8a1d0072d2847293e5b11a89c2a27e39.
Dec 13 01:34:30.708726 containerd[1957]: time="2024-12-13T01:34:30.708675818Z" level=info msg="StartContainer for \"3a50817b56954980d729262319fd7cfe8a1d0072d2847293e5b11a89c2a27e39\" returns successfully"
Dec 13 01:34:30.720225 systemd[1]: cri-containerd-3a50817b56954980d729262319fd7cfe8a1d0072d2847293e5b11a89c2a27e39.scope: Deactivated successfully.
Dec 13 01:34:30.775921 containerd[1957]: time="2024-12-13T01:34:30.775843338Z" level=info msg="shim disconnected" id=3a50817b56954980d729262319fd7cfe8a1d0072d2847293e5b11a89c2a27e39 namespace=k8s.io
Dec 13 01:34:30.775921 containerd[1957]: time="2024-12-13T01:34:30.775915701Z" level=warning msg="cleaning up after shim disconnected" id=3a50817b56954980d729262319fd7cfe8a1d0072d2847293e5b11a89c2a27e39 namespace=k8s.io
Dec 13 01:34:30.775921 containerd[1957]: time="2024-12-13T01:34:30.775930353Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:34:31.181433 kubelet[3344]: E1213 01:34:31.181386 3344 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:34:31.318409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a50817b56954980d729262319fd7cfe8a1d0072d2847293e5b11a89c2a27e39-rootfs.mount: Deactivated successfully.
Dec 13 01:34:31.543963 containerd[1957]: time="2024-12-13T01:34:31.543888843Z" level=info msg="CreateContainer within sandbox \"7e79d3dc911c753a3c21cd935b58e24cb832d01ff9ae39d310b79c3f64acbef9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:34:31.577583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170323647.mount: Deactivated successfully.
Dec 13 01:34:31.582814 containerd[1957]: time="2024-12-13T01:34:31.582761373Z" level=info msg="CreateContainer within sandbox \"7e79d3dc911c753a3c21cd935b58e24cb832d01ff9ae39d310b79c3f64acbef9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b7176d040f41a7423ede1bc69cfcf6dfe5afaf52e05542f55f4280bc211a519c\""
Dec 13 01:34:31.585131 containerd[1957]: time="2024-12-13T01:34:31.583789075Z" level=info msg="StartContainer for \"b7176d040f41a7423ede1bc69cfcf6dfe5afaf52e05542f55f4280bc211a519c\""
Dec 13 01:34:31.636670 systemd[1]: Started cri-containerd-b7176d040f41a7423ede1bc69cfcf6dfe5afaf52e05542f55f4280bc211a519c.scope - libcontainer container b7176d040f41a7423ede1bc69cfcf6dfe5afaf52e05542f55f4280bc211a519c.
Dec 13 01:34:31.674560 systemd[1]: cri-containerd-b7176d040f41a7423ede1bc69cfcf6dfe5afaf52e05542f55f4280bc211a519c.scope: Deactivated successfully.
Dec 13 01:34:31.678604 containerd[1957]: time="2024-12-13T01:34:31.678563266Z" level=info msg="StartContainer for \"b7176d040f41a7423ede1bc69cfcf6dfe5afaf52e05542f55f4280bc211a519c\" returns successfully"
Dec 13 01:34:31.719743 containerd[1957]: time="2024-12-13T01:34:31.719667506Z" level=info msg="shim disconnected" id=b7176d040f41a7423ede1bc69cfcf6dfe5afaf52e05542f55f4280bc211a519c namespace=k8s.io
Dec 13 01:34:31.719743 containerd[1957]: time="2024-12-13T01:34:31.719734368Z" level=warning msg="cleaning up after shim disconnected" id=b7176d040f41a7423ede1bc69cfcf6dfe5afaf52e05542f55f4280bc211a519c namespace=k8s.io
Dec 13 01:34:31.719743 containerd[1957]: time="2024-12-13T01:34:31.719746364Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:34:31.858095 kubelet[3344]: E1213 01:34:31.857537 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-px6z6" podUID="4e07e2b6-7463-46e7-b83f-d2250171da40"
Dec 13 01:34:31.859826 kubelet[3344]: E1213 01:34:31.859608 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-2l6ks" podUID="b2c17690-4f8b-4fd9-a6e4-751163da93da"
Dec 13 01:34:32.318906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7176d040f41a7423ede1bc69cfcf6dfe5afaf52e05542f55f4280bc211a519c-rootfs.mount: Deactivated successfully.
Dec 13 01:34:32.555381 containerd[1957]: time="2024-12-13T01:34:32.554151197Z" level=info msg="CreateContainer within sandbox \"7e79d3dc911c753a3c21cd935b58e24cb832d01ff9ae39d310b79c3f64acbef9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:34:32.596951 containerd[1957]: time="2024-12-13T01:34:32.596494130Z" level=info msg="CreateContainer within sandbox \"7e79d3dc911c753a3c21cd935b58e24cb832d01ff9ae39d310b79c3f64acbef9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6dadb0eac4486940cd129c0664353fc465130163e2ae222afececb15e4aac91b\""
Dec 13 01:34:32.598777 containerd[1957]: time="2024-12-13T01:34:32.597729972Z" level=info msg="StartContainer for \"6dadb0eac4486940cd129c0664353fc465130163e2ae222afececb15e4aac91b\""
Dec 13 01:34:32.670538 systemd[1]: Started cri-containerd-6dadb0eac4486940cd129c0664353fc465130163e2ae222afececb15e4aac91b.scope - libcontainer container 6dadb0eac4486940cd129c0664353fc465130163e2ae222afececb15e4aac91b.
Dec 13 01:34:32.719352 containerd[1957]: time="2024-12-13T01:34:32.717078955Z" level=info msg="StartContainer for \"6dadb0eac4486940cd129c0664353fc465130163e2ae222afececb15e4aac91b\" returns successfully" Dec 13 01:34:33.511411 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 01:34:33.859377 kubelet[3344]: E1213 01:34:33.858152 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-px6z6" podUID="4e07e2b6-7463-46e7-b83f-d2250171da40" Dec 13 01:34:33.859377 kubelet[3344]: E1213 01:34:33.859128 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-2l6ks" podUID="b2c17690-4f8b-4fd9-a6e4-751163da93da" Dec 13 01:34:35.465461 systemd[1]: run-containerd-runc-k8s.io-6dadb0eac4486940cd129c0664353fc465130163e2ae222afececb15e4aac91b-runc.WPyuYG.mount: Deactivated successfully. 
Dec 13 01:34:35.859321 kubelet[3344]: E1213 01:34:35.857610 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-px6z6" podUID="4e07e2b6-7463-46e7-b83f-d2250171da40"
Dec 13 01:34:35.860994 kubelet[3344]: E1213 01:34:35.860961 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-2l6ks" podUID="b2c17690-4f8b-4fd9-a6e4-751163da93da"
Dec 13 01:34:35.887544 containerd[1957]: time="2024-12-13T01:34:35.887356207Z" level=info msg="StopPodSandbox for \"e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77\""
Dec 13 01:34:35.888026 containerd[1957]: time="2024-12-13T01:34:35.887656043Z" level=info msg="TearDown network for sandbox \"e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77\" successfully"
Dec 13 01:34:35.888026 containerd[1957]: time="2024-12-13T01:34:35.887675330Z" level=info msg="StopPodSandbox for \"e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77\" returns successfully"
Dec 13 01:34:35.889374 containerd[1957]: time="2024-12-13T01:34:35.889335379Z" level=info msg="RemovePodSandbox for \"e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77\""
Dec 13 01:34:35.898321 containerd[1957]: time="2024-12-13T01:34:35.895052351Z" level=info msg="Forcibly stopping sandbox \"e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77\""
Dec 13 01:34:35.898321 containerd[1957]: time="2024-12-13T01:34:35.896265252Z" level=info msg="TearDown network for sandbox \"e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77\" successfully"
Dec 13 01:34:35.906124 containerd[1957]: time="2024-12-13T01:34:35.906069901Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:34:35.906323 containerd[1957]: time="2024-12-13T01:34:35.906298792Z" level=info msg="RemovePodSandbox \"e9c4b6d4763b6f6a65271cf09d0c3a0f518f6df01d93895b3f0e2eaf69fd2b77\" returns successfully"
Dec 13 01:34:35.907172 containerd[1957]: time="2024-12-13T01:34:35.907141803Z" level=info msg="StopPodSandbox for \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\""
Dec 13 01:34:35.907273 containerd[1957]: time="2024-12-13T01:34:35.907243292Z" level=info msg="TearDown network for sandbox \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\" successfully"
Dec 13 01:34:35.907273 containerd[1957]: time="2024-12-13T01:34:35.907262789Z" level=info msg="StopPodSandbox for \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\" returns successfully"
Dec 13 01:34:35.907694 containerd[1957]: time="2024-12-13T01:34:35.907656912Z" level=info msg="RemovePodSandbox for \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\""
Dec 13 01:34:35.907694 containerd[1957]: time="2024-12-13T01:34:35.907690954Z" level=info msg="Forcibly stopping sandbox \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\""
Dec 13 01:34:35.907836 containerd[1957]: time="2024-12-13T01:34:35.907752620Z" level=info msg="TearDown network for sandbox \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\" successfully"
Dec 13 01:34:35.916912 containerd[1957]: time="2024-12-13T01:34:35.916865915Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:34:35.917465 containerd[1957]: time="2024-12-13T01:34:35.917148004Z" level=info msg="RemovePodSandbox \"c4c1bdcd99486f2a52bc163f9b82eaa77ee736e9e6777808f39c583d3707fad1\" returns successfully"
Dec 13 01:34:37.524923 systemd-networkd[1809]: lxc_health: Link UP
Dec 13 01:34:37.537467 (udev-worker)[5979]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:34:37.539966 systemd-networkd[1809]: lxc_health: Gained carrier
Dec 13 01:34:37.770778 systemd[1]: run-containerd-runc-k8s.io-6dadb0eac4486940cd129c0664353fc465130163e2ae222afececb15e4aac91b-runc.qktMcy.mount: Deactivated successfully.
Dec 13 01:34:38.656460 systemd-networkd[1809]: lxc_health: Gained IPv6LL
Dec 13 01:34:38.719889 kubelet[3344]: I1213 01:34:38.719841 3344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-7tcr5" podStartSLOduration=11.719790059 podStartE2EDuration="11.719790059s" podCreationTimestamp="2024-12-13 01:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:34:33.594239845 +0000 UTC m=+117.985550801" watchObservedRunningTime="2024-12-13 01:34:38.719790059 +0000 UTC m=+123.111101013"
Dec 13 01:34:41.427809 ntpd[1940]: Listen normally on 14 lxc_health [fe80::5882:5fff:fea9:4cd4%14]:123
Dec 13 01:34:41.428231 ntpd[1940]: 13 Dec 01:34:41 ntpd[1940]: Listen normally on 14 lxc_health [fe80::5882:5fff:fea9:4cd4%14]:123
Dec 13 01:34:44.735104 systemd[1]: run-containerd-runc-k8s.io-6dadb0eac4486940cd129c0664353fc465130163e2ae222afececb15e4aac91b-runc.Gawsej.mount: Deactivated successfully.
Dec 13 01:34:44.820577 kubelet[3344]: E1213 01:34:44.820492 3344 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40466->127.0.0.1:40129: write tcp 127.0.0.1:40466->127.0.0.1:40129: write: broken pipe
Dec 13 01:34:44.899178 sshd[5137]: pam_unix(sshd:session): session closed for user core
Dec 13 01:34:44.916646 systemd[1]: sshd@29-172.31.29.36:22-139.178.68.195:52186.service: Deactivated successfully.
Dec 13 01:34:44.919620 systemd[1]: session-30.scope: Deactivated successfully.
Dec 13 01:34:44.921062 systemd-logind[1946]: Session 30 logged out. Waiting for processes to exit.
Dec 13 01:34:44.922656 systemd-logind[1946]: Removed session 30.
Dec 13 01:34:58.701361 kubelet[3344]: E1213 01:34:58.701304 3344 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-36?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:35:00.048173 systemd[1]: cri-containerd-ef85f3543b698674432a81a7dc2cd8f1cd97cfa30b1942753ec4c02a7c4e0075.scope: Deactivated successfully.
Dec 13 01:35:00.048716 systemd[1]: cri-containerd-ef85f3543b698674432a81a7dc2cd8f1cd97cfa30b1942753ec4c02a7c4e0075.scope: Consumed 3.266s CPU time, 30.1M memory peak, 0B memory swap peak.
Dec 13 01:35:00.089665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef85f3543b698674432a81a7dc2cd8f1cd97cfa30b1942753ec4c02a7c4e0075-rootfs.mount: Deactivated successfully.
Dec 13 01:35:00.127552 containerd[1957]: time="2024-12-13T01:35:00.127458997Z" level=info msg="shim disconnected" id=ef85f3543b698674432a81a7dc2cd8f1cd97cfa30b1942753ec4c02a7c4e0075 namespace=k8s.io
Dec 13 01:35:00.127552 containerd[1957]: time="2024-12-13T01:35:00.127545066Z" level=warning msg="cleaning up after shim disconnected" id=ef85f3543b698674432a81a7dc2cd8f1cd97cfa30b1942753ec4c02a7c4e0075 namespace=k8s.io
Dec 13 01:35:00.127552 containerd[1957]: time="2024-12-13T01:35:00.127557377Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:35:00.641730 kubelet[3344]: I1213 01:35:00.641698 3344 scope.go:117] "RemoveContainer" containerID="ef85f3543b698674432a81a7dc2cd8f1cd97cfa30b1942753ec4c02a7c4e0075"
Dec 13 01:35:00.644770 containerd[1957]: time="2024-12-13T01:35:00.644726544Z" level=info msg="CreateContainer within sandbox \"fb592d3cdb5b66650e959f51c1ad281b0ec77d93221218c90b4bb9d33ec2a833\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 01:35:00.674111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2178904926.mount: Deactivated successfully.
Dec 13 01:35:00.682822 containerd[1957]: time="2024-12-13T01:35:00.682765282Z" level=info msg="CreateContainer within sandbox \"fb592d3cdb5b66650e959f51c1ad281b0ec77d93221218c90b4bb9d33ec2a833\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"93186a28905772b2f5bdaad7a9c0c90cab4591841936d72d3c47d243e98b3a19\""
Dec 13 01:35:00.685179 containerd[1957]: time="2024-12-13T01:35:00.683841880Z" level=info msg="StartContainer for \"93186a28905772b2f5bdaad7a9c0c90cab4591841936d72d3c47d243e98b3a19\""
Dec 13 01:35:00.726894 systemd[1]: Started cri-containerd-93186a28905772b2f5bdaad7a9c0c90cab4591841936d72d3c47d243e98b3a19.scope - libcontainer container 93186a28905772b2f5bdaad7a9c0c90cab4591841936d72d3c47d243e98b3a19.
Dec 13 01:35:00.786813 containerd[1957]: time="2024-12-13T01:35:00.786759306Z" level=info msg="StartContainer for \"93186a28905772b2f5bdaad7a9c0c90cab4591841936d72d3c47d243e98b3a19\" returns successfully"
Dec 13 01:35:05.326594 systemd[1]: cri-containerd-15268a03bf6f0e690a28fca67c951c8b35e3587d4f14f1df61d120b2c38fa961.scope: Deactivated successfully.
Dec 13 01:35:05.327635 systemd[1]: cri-containerd-15268a03bf6f0e690a28fca67c951c8b35e3587d4f14f1df61d120b2c38fa961.scope: Consumed 1.395s CPU time, 19.3M memory peak, 0B memory swap peak.
Dec 13 01:35:05.387662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15268a03bf6f0e690a28fca67c951c8b35e3587d4f14f1df61d120b2c38fa961-rootfs.mount: Deactivated successfully.
Dec 13 01:35:05.421397 containerd[1957]: time="2024-12-13T01:35:05.421084157Z" level=info msg="shim disconnected" id=15268a03bf6f0e690a28fca67c951c8b35e3587d4f14f1df61d120b2c38fa961 namespace=k8s.io
Dec 13 01:35:05.421397 containerd[1957]: time="2024-12-13T01:35:05.421156165Z" level=warning msg="cleaning up after shim disconnected" id=15268a03bf6f0e690a28fca67c951c8b35e3587d4f14f1df61d120b2c38fa961 namespace=k8s.io
Dec 13 01:35:05.421397 containerd[1957]: time="2024-12-13T01:35:05.421171069Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:35:05.665482 kubelet[3344]: I1213 01:35:05.664874 3344 scope.go:117] "RemoveContainer" containerID="15268a03bf6f0e690a28fca67c951c8b35e3587d4f14f1df61d120b2c38fa961"
Dec 13 01:35:05.668301 containerd[1957]: time="2024-12-13T01:35:05.668051968Z" level=info msg="CreateContainer within sandbox \"e6748d13b23b0326004ab7d2e7346a4adb87eddd6b888c02ef92da0fcf0458af\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 01:35:05.698798 containerd[1957]: time="2024-12-13T01:35:05.698754337Z" level=info msg="CreateContainer within sandbox \"e6748d13b23b0326004ab7d2e7346a4adb87eddd6b888c02ef92da0fcf0458af\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5b6b19ce412740a1fd5161078988985467d0b29895543a5da11a4e9045174852\""
Dec 13 01:35:05.700235 containerd[1957]: time="2024-12-13T01:35:05.700166898Z" level=info msg="StartContainer for \"5b6b19ce412740a1fd5161078988985467d0b29895543a5da11a4e9045174852\""
Dec 13 01:35:05.754141 systemd[1]: Started cri-containerd-5b6b19ce412740a1fd5161078988985467d0b29895543a5da11a4e9045174852.scope - libcontainer container 5b6b19ce412740a1fd5161078988985467d0b29895543a5da11a4e9045174852.
Dec 13 01:35:05.814119 containerd[1957]: time="2024-12-13T01:35:05.814070635Z" level=info msg="StartContainer for \"5b6b19ce412740a1fd5161078988985467d0b29895543a5da11a4e9045174852\" returns successfully"
Dec 13 01:35:08.701665 kubelet[3344]: E1213 01:35:08.701627 3344 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-36?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:35:18.703005 kubelet[3344]: E1213 01:35:18.702659 3344 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-36?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"