Mar 25 01:15:46.175004 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Mar 25 01:15:46.175052 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Mon Mar 24 23:39:14 -00 2025 Mar 25 01:15:46.175077 kernel: KASLR disabled due to lack of seed Mar 25 01:15:46.175093 kernel: efi: EFI v2.7 by EDK II Mar 25 01:15:46.175109 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78551598 Mar 25 01:15:46.175124 kernel: secureboot: Secure boot disabled Mar 25 01:15:46.175141 kernel: ACPI: Early table checksum verification disabled Mar 25 01:15:46.175156 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Mar 25 01:15:46.175172 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Mar 25 01:15:46.175187 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Mar 25 01:15:46.175235 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Mar 25 01:15:46.175254 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Mar 25 01:15:46.175270 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Mar 25 01:15:46.175286 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Mar 25 01:15:46.175304 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Mar 25 01:15:46.175327 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Mar 25 01:15:46.175344 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Mar 25 01:15:46.175360 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Mar 25 01:15:46.175376 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Mar 25 01:15:46.175392 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Mar 25 01:15:46.175408 kernel: printk: bootconsole [uart0] enabled Mar 25 01:15:46.175424 kernel: NUMA: Failed to initialise from firmware Mar 25 01:15:46.175441 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Mar 25 01:15:46.175459 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Mar 25 01:15:46.175476 kernel: Zone ranges: Mar 25 01:15:46.175492 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Mar 25 01:15:46.175515 kernel: DMA32 empty Mar 25 01:15:46.175532 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Mar 25 01:15:46.175549 kernel: Movable zone start for each node Mar 25 01:15:46.175566 kernel: Early memory node ranges Mar 25 01:15:46.175582 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Mar 25 01:15:46.175598 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Mar 25 01:15:46.175614 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Mar 25 01:15:46.175631 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Mar 25 01:15:46.175647 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Mar 25 01:15:46.175663 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Mar 25 01:15:46.175679 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Mar 25 01:15:46.175696 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Mar 25 01:15:46.175717 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000004b5ffffff] Mar 25 01:15:46.175735 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Mar 25 01:15:46.175759 kernel: psci: probing for conduit method from ACPI. Mar 25 01:15:46.175776 kernel: psci: PSCIv1.0 detected in firmware. Mar 25 01:15:46.175794 kernel: psci: Using standard PSCI v0.2 function IDs Mar 25 01:15:46.175816 kernel: psci: Trusted OS migration not required Mar 25 01:15:46.175834 kernel: psci: SMC Calling Convention v1.1 Mar 25 01:15:46.175850 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Mar 25 01:15:46.175867 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Mar 25 01:15:46.175885 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 25 01:15:46.175901 kernel: Detected PIPT I-cache on CPU0 Mar 25 01:15:46.175918 kernel: CPU features: detected: GIC system register CPU interface Mar 25 01:15:46.175935 kernel: CPU features: detected: Spectre-v2 Mar 25 01:15:46.175952 kernel: CPU features: detected: Spectre-v3a Mar 25 01:15:46.175969 kernel: CPU features: detected: Spectre-BHB Mar 25 01:15:46.175986 kernel: CPU features: detected: ARM erratum 1742098 Mar 25 01:15:46.176002 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Mar 25 01:15:46.176024 kernel: alternatives: applying boot alternatives Mar 25 01:15:46.176043 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=b84e5f613acd6cd0a8a878f32f5653a14f2e6fb2820997fecd5b2bd33a4ba3ab Mar 25 01:15:46.176062 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 25 01:15:46.176079 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 25 01:15:46.176096 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 25 01:15:46.176113 kernel: Fallback order for Node 0: 0 Mar 25 01:15:46.176130 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Mar 25 01:15:46.176146 kernel: Policy zone: Normal Mar 25 01:15:46.176163 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 25 01:15:46.176180 kernel: software IO TLB: area num 2. Mar 25 01:15:46.176943 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Mar 25 01:15:46.176976 kernel: Memory: 3821112K/4030464K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38464K init, 897K bss, 209352K reserved, 0K cma-reserved) Mar 25 01:15:46.176995 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 25 01:15:46.177013 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 25 01:15:46.177032 kernel: rcu: RCU event tracing is enabled. Mar 25 01:15:46.177050 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 25 01:15:46.177068 kernel: Trampoline variant of Tasks RCU enabled. Mar 25 01:15:46.177085 kernel: Tracing variant of Tasks RCU enabled. Mar 25 01:15:46.177102 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
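The kernel command line recorded above (verity.usrhash=..., flatcar.first_boot=detected, flatcar.oem.id=ec2, and so on) is what the initrd units appearing later in this log — dracut-cmdline, parse-ip-for-networkd, the Ignition services — read their settings from. As a rough illustration only (the helper name parse_cmdline is made up; this is not the dracut or Flatcar parser), splitting such parameters into key/value pairs can look like the sketch below; repeated keys such as console= are simply last-one-wins here.

    # Illustrative only: a minimal /proc/cmdline parser, not the dracut/Flatcar implementation.
    import shlex

    def parse_cmdline(cmdline: str) -> dict:
        """Split kernel parameters into a dict; bare flags (e.g. 'earlycon') map to True."""
        params = {}
        for token in shlex.split(cmdline):          # handles quoted values if present
            key, sep, value = token.partition("=")  # 'root=LABEL=ROOT' -> ('root', 'LABEL=ROOT')
            params[key] = value if sep else True
        return params

    if __name__ == "__main__":
        with open("/proc/cmdline") as f:
            args = parse_cmdline(f.read())
        print(args.get("flatcar.first_boot"), args.get("verity.usrhash"))
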
Mar 25 01:15:46.177120 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 25 01:15:46.177137 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 25 01:15:46.177161 kernel: GICv3: 96 SPIs implemented Mar 25 01:15:46.177178 kernel: GICv3: 0 Extended SPIs implemented Mar 25 01:15:46.177195 kernel: Root IRQ handler: gic_handle_irq Mar 25 01:15:46.177244 kernel: GICv3: GICv3 features: 16 PPIs Mar 25 01:15:46.177263 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Mar 25 01:15:46.177280 kernel: ITS [mem 0x10080000-0x1009ffff] Mar 25 01:15:46.177297 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Mar 25 01:15:46.177315 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Mar 25 01:15:46.177332 kernel: GICv3: using LPI property table @0x00000004000d0000 Mar 25 01:15:46.177349 kernel: ITS: Using hypervisor restricted LPI range [128] Mar 25 01:15:46.177366 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Mar 25 01:15:46.177383 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 25 01:15:46.177407 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Mar 25 01:15:46.177425 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Mar 25 01:15:46.177442 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Mar 25 01:15:46.177460 kernel: Console: colour dummy device 80x25 Mar 25 01:15:46.177478 kernel: printk: console [tty1] enabled Mar 25 01:15:46.177495 kernel: ACPI: Core revision 20230628 Mar 25 01:15:46.177513 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Mar 25 01:15:46.177530 kernel: pid_max: default: 32768 minimum: 301 Mar 25 01:15:46.177548 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 25 01:15:46.177565 kernel: landlock: Up and running. Mar 25 01:15:46.177587 kernel: SELinux: Initializing. Mar 25 01:15:46.177604 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 25 01:15:46.177622 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 25 01:15:46.177639 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 25 01:15:46.177657 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 25 01:15:46.177674 kernel: rcu: Hierarchical SRCU implementation. Mar 25 01:15:46.177692 kernel: rcu: Max phase no-delay instances is 400. Mar 25 01:15:46.177709 kernel: Platform MSI: ITS@0x10080000 domain created Mar 25 01:15:46.177731 kernel: PCI/MSI: ITS@0x10080000 domain created Mar 25 01:15:46.177748 kernel: Remapping and enabling EFI services. Mar 25 01:15:46.177765 kernel: smp: Bringing up secondary CPUs ... Mar 25 01:15:46.177783 kernel: Detected PIPT I-cache on CPU1 Mar 25 01:15:46.177800 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Mar 25 01:15:46.177818 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Mar 25 01:15:46.177835 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Mar 25 01:15:46.177852 kernel: smp: Brought up 1 node, 2 CPUs Mar 25 01:15:46.177869 kernel: SMP: Total of 2 processors activated. 
Mar 25 01:15:46.177886 kernel: CPU features: detected: 32-bit EL0 Support Mar 25 01:15:46.177908 kernel: CPU features: detected: 32-bit EL1 Support Mar 25 01:15:46.177925 kernel: CPU features: detected: CRC32 instructions Mar 25 01:15:46.177954 kernel: CPU: All CPU(s) started at EL1 Mar 25 01:15:46.177977 kernel: alternatives: applying system-wide alternatives Mar 25 01:15:46.177995 kernel: devtmpfs: initialized Mar 25 01:15:46.178013 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 25 01:15:46.178031 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 25 01:15:46.178049 kernel: pinctrl core: initialized pinctrl subsystem Mar 25 01:15:46.178067 kernel: SMBIOS 3.0.0 present. Mar 25 01:15:46.178090 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Mar 25 01:15:46.178109 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 25 01:15:46.178127 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 25 01:15:46.178145 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 25 01:15:46.178164 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 25 01:15:46.178182 kernel: audit: initializing netlink subsys (disabled) Mar 25 01:15:46.178215 kernel: audit: type=2000 audit(0.218:1): state=initialized audit_enabled=0 res=1 Mar 25 01:15:46.178270 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 25 01:15:46.178289 kernel: cpuidle: using governor menu Mar 25 01:15:46.178307 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 25 01:15:46.178326 kernel: ASID allocator initialised with 65536 entries Mar 25 01:15:46.178344 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 25 01:15:46.178362 kernel: Serial: AMBA PL011 UART driver Mar 25 01:15:46.178379 kernel: Modules: 17728 pages in range for non-PLT usage Mar 25 01:15:46.178397 kernel: Modules: 509248 pages in range for PLT usage Mar 25 01:15:46.178415 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 25 01:15:46.178438 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 25 01:15:46.178457 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 25 01:15:46.178475 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 25 01:15:46.178493 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 25 01:15:46.178511 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 25 01:15:46.178528 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Mar 25 01:15:46.178547 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 25 01:15:46.178566 kernel: ACPI: Added _OSI(Module Device) Mar 25 01:15:46.178584 kernel: ACPI: Added _OSI(Processor Device) Mar 25 01:15:46.178606 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 25 01:15:46.178624 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 25 01:15:46.178642 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 25 01:15:46.178660 kernel: ACPI: Interpreter enabled Mar 25 01:15:46.178678 kernel: ACPI: Using GIC for interrupt routing Mar 25 01:15:46.178696 kernel: ACPI: MCFG table detected, 1 entries Mar 25 01:15:46.178714 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Mar 25 01:15:46.179044 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 25 01:15:46.179296 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Mar 25 01:15:46.179612 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Mar 25 01:15:46.179915 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Mar 25 01:15:46.180136 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Mar 25 01:15:46.180161 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Mar 25 01:15:46.180180 kernel: acpiphp: Slot [1] registered Mar 25 01:15:46.180199 kernel: acpiphp: Slot [2] registered Mar 25 01:15:46.180262 kernel: acpiphp: Slot [3] registered Mar 25 01:15:46.180291 kernel: acpiphp: Slot [4] registered Mar 25 01:15:46.180324 kernel: acpiphp: Slot [5] registered Mar 25 01:15:46.180348 kernel: acpiphp: Slot [6] registered Mar 25 01:15:46.180366 kernel: acpiphp: Slot [7] registered Mar 25 01:15:46.180384 kernel: acpiphp: Slot [8] registered Mar 25 01:15:46.180402 kernel: acpiphp: Slot [9] registered Mar 25 01:15:46.180420 kernel: acpiphp: Slot [10] registered Mar 25 01:15:46.180438 kernel: acpiphp: Slot [11] registered Mar 25 01:15:46.180455 kernel: acpiphp: Slot [12] registered Mar 25 01:15:46.180473 kernel: acpiphp: Slot [13] registered Mar 25 01:15:46.180497 kernel: acpiphp: Slot [14] registered Mar 25 01:15:46.180515 kernel: acpiphp: Slot [15] registered Mar 25 01:15:46.180533 kernel: acpiphp: Slot [16] registered Mar 25 01:15:46.180550 kernel: acpiphp: Slot [17] registered Mar 25 01:15:46.180568 kernel: acpiphp: Slot [18] registered Mar 25 01:15:46.180586 kernel: acpiphp: Slot [19] registered Mar 25 01:15:46.180604 kernel: acpiphp: Slot [20] registered Mar 25 01:15:46.180622 kernel: acpiphp: Slot [21] registered Mar 25 01:15:46.180640 kernel: acpiphp: Slot [22] registered Mar 25 01:15:46.180662 kernel: acpiphp: Slot [23] registered Mar 25 01:15:46.180680 kernel: acpiphp: Slot [24] registered Mar 25 01:15:46.180698 kernel: acpiphp: Slot [25] registered Mar 25 01:15:46.180716 kernel: acpiphp: Slot [26] registered Mar 25 01:15:46.180734 kernel: acpiphp: Slot [27] registered Mar 25 01:15:46.180752 kernel: acpiphp: Slot [28] registered Mar 25 01:15:46.180770 kernel: acpiphp: Slot [29] registered Mar 25 01:15:46.180788 kernel: acpiphp: Slot [30] registered Mar 25 01:15:46.180806 kernel: acpiphp: Slot [31] registered Mar 25 01:15:46.180824 kernel: PCI host bridge to bus 0000:00 Mar 25 01:15:46.181045 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Mar 25 01:15:46.181264 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Mar 25 01:15:46.181459 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Mar 25 01:15:46.181647 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Mar 25 01:15:46.181887 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Mar 25 01:15:46.182131 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Mar 25 01:15:46.182414 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Mar 25 01:15:46.182648 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Mar 25 01:15:46.182858 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Mar 25 01:15:46.183070 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Mar 25 01:15:46.183359 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Mar 25 01:15:46.183571 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Mar 25 01:15:46.185521 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Mar 25 01:15:46.185809 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Mar 25 01:15:46.186022 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Mar 25 01:15:46.189301 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Mar 25 01:15:46.189567 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Mar 25 01:15:46.189781 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Mar 25 01:15:46.189990 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Mar 25 01:15:46.191401 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Mar 25 01:15:46.194460 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Mar 25 01:15:46.194662 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Mar 25 01:15:46.194847 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Mar 25 01:15:46.194873 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Mar 25 01:15:46.194892 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Mar 25 01:15:46.194911 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Mar 25 01:15:46.194930 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Mar 25 01:15:46.194948 kernel: iommu: Default domain type: Translated Mar 25 01:15:46.194976 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 25 01:15:46.194995 kernel: efivars: Registered efivars operations Mar 25 01:15:46.195012 kernel: vgaarb: loaded Mar 25 01:15:46.195030 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 25 01:15:46.195048 kernel: VFS: Disk quotas dquot_6.6.0 Mar 25 01:15:46.195066 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 25 01:15:46.195085 kernel: pnp: PnP ACPI init Mar 25 01:15:46.197718 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Mar 25 01:15:46.197777 kernel: pnp: PnP ACPI: found 1 devices Mar 25 01:15:46.197798 kernel: NET: Registered PF_INET protocol family Mar 25 01:15:46.197817 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 25 01:15:46.197836 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 25 01:15:46.197854 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 25 01:15:46.197873 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 25 01:15:46.197891 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 25 01:15:46.197911 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 25 01:15:46.197930 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 25 01:15:46.197953 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 25 01:15:46.197972 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 25 01:15:46.197990 kernel: PCI: CLS 0 bytes, default 64 Mar 25 01:15:46.198008 kernel: kvm [1]: HYP mode not available Mar 25 01:15:46.198026 kernel: Initialise system trusted keyrings Mar 25 01:15:46.198045 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 25 01:15:46.198063 kernel: Key type asymmetric registered Mar 25 01:15:46.198081 kernel: Asymmetric key parser 'x509' registered Mar 25 01:15:46.198099 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 25 01:15:46.198122 kernel: io scheduler mq-deadline registered Mar 25 
01:15:46.198140 kernel: io scheduler kyber registered Mar 25 01:15:46.198158 kernel: io scheduler bfq registered Mar 25 01:15:46.198431 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Mar 25 01:15:46.198464 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 25 01:15:46.198483 kernel: ACPI: button: Power Button [PWRB] Mar 25 01:15:46.198502 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Mar 25 01:15:46.198521 kernel: ACPI: button: Sleep Button [SLPB] Mar 25 01:15:46.198546 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 25 01:15:46.198566 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Mar 25 01:15:46.198788 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Mar 25 01:15:46.198815 kernel: printk: console [ttyS0] disabled Mar 25 01:15:46.198834 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Mar 25 01:15:46.198853 kernel: printk: console [ttyS0] enabled Mar 25 01:15:46.198871 kernel: printk: bootconsole [uart0] disabled Mar 25 01:15:46.198890 kernel: thunder_xcv, ver 1.0 Mar 25 01:15:46.198908 kernel: thunder_bgx, ver 1.0 Mar 25 01:15:46.198926 kernel: nicpf, ver 1.0 Mar 25 01:15:46.198950 kernel: nicvf, ver 1.0 Mar 25 01:15:46.199182 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 25 01:15:46.203592 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-25T01:15:45 UTC (1742865345) Mar 25 01:15:46.203634 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 25 01:15:46.203654 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Mar 25 01:15:46.203674 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 25 01:15:46.203692 kernel: watchdog: Hard watchdog permanently disabled Mar 25 01:15:46.203723 kernel: NET: Registered PF_INET6 protocol family Mar 25 01:15:46.203744 kernel: Segment Routing with IPv6 Mar 25 01:15:46.203763 kernel: In-situ OAM (IOAM) with IPv6 Mar 25 01:15:46.203782 kernel: NET: Registered PF_PACKET protocol family Mar 25 01:15:46.203801 kernel: Key type dns_resolver registered Mar 25 01:15:46.203820 kernel: registered taskstats version 1 Mar 25 01:15:46.203840 kernel: Loading compiled-in X.509 certificates Mar 25 01:15:46.203859 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: ed4ababe871f0afac8b4236504477de11a6baf07' Mar 25 01:15:46.203879 kernel: Key type .fscrypt registered Mar 25 01:15:46.203897 kernel: Key type fscrypt-provisioning registered Mar 25 01:15:46.203920 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 25 01:15:46.203938 kernel: ima: Allocated hash algorithm: sha1 Mar 25 01:15:46.203956 kernel: ima: No architecture policies found Mar 25 01:15:46.203975 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 25 01:15:46.203993 kernel: clk: Disabling unused clocks Mar 25 01:15:46.204011 kernel: Freeing unused kernel memory: 38464K Mar 25 01:15:46.204029 kernel: Run /init as init process Mar 25 01:15:46.204047 kernel: with arguments: Mar 25 01:15:46.204065 kernel: /init Mar 25 01:15:46.204088 kernel: with environment: Mar 25 01:15:46.204106 kernel: HOME=/ Mar 25 01:15:46.204124 kernel: TERM=linux Mar 25 01:15:46.204141 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 25 01:15:46.204161 systemd[1]: Successfully made /usr/ read-only. 
Mar 25 01:15:46.204187 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 25 01:15:46.204305 systemd[1]: Detected virtualization amazon. Mar 25 01:15:46.204354 systemd[1]: Detected architecture arm64. Mar 25 01:15:46.204375 systemd[1]: Running in initrd. Mar 25 01:15:46.204396 systemd[1]: No hostname configured, using default hostname. Mar 25 01:15:46.204417 systemd[1]: Hostname set to . Mar 25 01:15:46.204437 systemd[1]: Initializing machine ID from VM UUID. Mar 25 01:15:46.204457 systemd[1]: Queued start job for default target initrd.target. Mar 25 01:15:46.204477 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 25 01:15:46.204497 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 25 01:15:46.204519 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 25 01:15:46.204545 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 25 01:15:46.204565 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 25 01:15:46.204586 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 25 01:15:46.204609 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 25 01:15:46.204630 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 25 01:15:46.204651 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 25 01:15:46.204675 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 25 01:15:46.204695 systemd[1]: Reached target paths.target - Path Units. Mar 25 01:15:46.204715 systemd[1]: Reached target slices.target - Slice Units. Mar 25 01:15:46.204734 systemd[1]: Reached target swap.target - Swaps. Mar 25 01:15:46.204754 systemd[1]: Reached target timers.target - Timer Units. Mar 25 01:15:46.204774 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 25 01:15:46.204793 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 25 01:15:46.204813 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 25 01:15:46.204833 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 25 01:15:46.204858 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 25 01:15:46.204879 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 25 01:15:46.204898 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 25 01:15:46.204918 systemd[1]: Reached target sockets.target - Socket Units. Mar 25 01:15:46.204938 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 25 01:15:46.204958 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 25 01:15:46.204978 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 25 01:15:46.204997 systemd[1]: Starting systemd-fsck-usr.service... 
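The device units systemd announces here (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, dev-disk-by\x2dpartuuid-7130c94a\x2d....device, and so on) get their names from systemd's path escaping: '/' becomes '-', and other reserved bytes, including a literal '-', become \xNN. A rough Python approximation of that rule follows (the canonical tool is systemd-escape --path; the exact allowed-character set used here is an assumption, and the function name is made up for the sketch).

    # Rough sketch of systemd path-to-unit-name escaping; approximation for illustration only.
    def escape_path_to_unit(path: str, suffix: str = ".device") -> str:
        allowed = set("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789:_.")
        trimmed = path.strip("/") or "/"
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")                      # path separators become dashes
            elif ch in allowed and not (i == 0 and ch == "."):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))      # everything else, including '-', is \xNN-escaped
        return "".join(out) + suffix

    # escape_path_to_unit("/dev/disk/by-label/EFI-SYSTEM")
    #   -> "dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device"  (matches the units in the log)
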
Mar 25 01:15:46.205021 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 25 01:15:46.205041 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 25 01:15:46.205061 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:15:46.205081 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 25 01:15:46.205102 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 25 01:15:46.205123 systemd[1]: Finished systemd-fsck-usr.service. Mar 25 01:15:46.205148 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 25 01:15:46.206311 systemd-journald[251]: Collecting audit messages is disabled. Mar 25 01:15:46.206374 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:15:46.206407 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 25 01:15:46.206429 systemd-journald[251]: Journal started Mar 25 01:15:46.206467 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2c468f40434427fcc494f3f015ccda) is 8M, max 75.3M, 67.3M free. Mar 25 01:15:46.182252 systemd-modules-load[253]: Inserted module 'overlay' Mar 25 01:15:46.209170 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 25 01:15:46.217737 systemd-modules-load[253]: Inserted module 'br_netfilter' Mar 25 01:15:46.219933 systemd[1]: Started systemd-journald.service - Journal Service. Mar 25 01:15:46.219980 kernel: Bridge firewalling registered Mar 25 01:15:46.228977 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 25 01:15:46.229957 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 25 01:15:46.236289 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 25 01:15:46.239526 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 25 01:15:46.247593 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 25 01:15:46.307581 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 01:15:46.316163 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:15:46.321829 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 25 01:15:46.325617 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 25 01:15:46.333110 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 25 01:15:46.348440 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 25 01:15:46.389558 dracut-cmdline[289]: dracut-dracut-053 Mar 25 01:15:46.396448 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=b84e5f613acd6cd0a8a878f32f5653a14f2e6fb2820997fecd5b2bd33a4ba3ab Mar 25 01:15:46.444387 systemd-resolved[290]: Positive Trust Anchors: Mar 25 01:15:46.444421 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 25 01:15:46.444483 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 25 01:15:46.557241 kernel: SCSI subsystem initialized Mar 25 01:15:46.567233 kernel: Loading iSCSI transport class v2.0-870. Mar 25 01:15:46.577244 kernel: iscsi: registered transport (tcp) Mar 25 01:15:46.599362 kernel: iscsi: registered transport (qla4xxx) Mar 25 01:15:46.599436 kernel: QLogic iSCSI HBA Driver Mar 25 01:15:46.658242 kernel: random: crng init done Mar 25 01:15:46.658500 systemd-resolved[290]: Defaulting to hostname 'linux'. Mar 25 01:15:46.663686 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 25 01:15:46.669275 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 25 01:15:46.691281 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 25 01:15:46.699460 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 25 01:15:46.744149 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 25 01:15:46.744262 kernel: device-mapper: uevent: version 1.0.3 Mar 25 01:15:46.744291 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 25 01:15:46.809274 kernel: raid6: neonx8 gen() 6531 MB/s Mar 25 01:15:46.826241 kernel: raid6: neonx4 gen() 6528 MB/s Mar 25 01:15:46.843235 kernel: raid6: neonx2 gen() 5440 MB/s Mar 25 01:15:46.860235 kernel: raid6: neonx1 gen() 3940 MB/s Mar 25 01:15:46.877249 kernel: raid6: int64x8 gen() 3606 MB/s Mar 25 01:15:46.894248 kernel: raid6: int64x4 gen() 3698 MB/s Mar 25 01:15:46.911237 kernel: raid6: int64x2 gen() 3597 MB/s Mar 25 01:15:46.928983 kernel: raid6: int64x1 gen() 2759 MB/s Mar 25 01:15:46.929016 kernel: raid6: using algorithm neonx8 gen() 6531 MB/s Mar 25 01:15:46.946962 kernel: raid6: .... 
xor() 4761 MB/s, rmw enabled Mar 25 01:15:46.947002 kernel: raid6: using neon recovery algorithm Mar 25 01:15:46.955049 kernel: xor: measuring software checksum speed Mar 25 01:15:46.955109 kernel: 8regs : 12933 MB/sec Mar 25 01:15:46.956234 kernel: 32regs : 11985 MB/sec Mar 25 01:15:46.958174 kernel: arm64_neon : 8712 MB/sec Mar 25 01:15:46.958236 kernel: xor: using function: 8regs (12933 MB/sec) Mar 25 01:15:47.041530 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 25 01:15:47.059099 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 25 01:15:47.077885 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 25 01:15:47.125340 systemd-udevd[472]: Using default interface naming scheme 'v255'. Mar 25 01:15:47.134733 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 25 01:15:47.139230 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 25 01:15:47.181324 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation Mar 25 01:15:47.234135 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 25 01:15:47.241436 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 25 01:15:47.359428 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 25 01:15:47.372474 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 25 01:15:47.414962 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 25 01:15:47.420114 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 25 01:15:47.433381 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 25 01:15:47.438117 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 25 01:15:47.445804 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 25 01:15:47.495381 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 25 01:15:47.556108 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 25 01:15:47.556182 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Mar 25 01:15:47.575951 kernel: ena 0000:00:05.0: ENA device version: 0.10 Mar 25 01:15:47.576231 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Mar 25 01:15:47.576520 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:aa:28:a9:b5:cb Mar 25 01:15:47.592900 (udev-worker)[525]: Network interface NamePolicy= disabled on kernel command line. Mar 25 01:15:47.598098 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 25 01:15:47.598366 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 01:15:47.608784 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 25 01:15:47.611064 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 25 01:15:47.611423 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:15:47.624353 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:15:47.631993 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:15:47.635704 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Mar 25 01:15:47.651486 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 25 01:15:47.651548 kernel: nvme nvme0: pci function 0000:00:04.0 Mar 25 01:15:47.661297 kernel: nvme nvme0: 2/0/0 default/read/poll queues Mar 25 01:15:47.670478 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:15:47.685095 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 25 01:15:47.685134 kernel: GPT:9289727 != 16777215 Mar 25 01:15:47.685158 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 25 01:15:47.685182 kernel: GPT:9289727 != 16777215 Mar 25 01:15:47.685225 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 25 01:15:47.685265 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 25 01:15:47.687410 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 25 01:15:47.723742 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 01:15:47.788102 kernel: BTRFS: device fsid bf348154-9cb1-474d-801c-0e035a5758cf devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (515) Mar 25 01:15:47.811238 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by (udev-worker) (516) Mar 25 01:15:47.844902 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Mar 25 01:15:47.902091 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Mar 25 01:15:47.964386 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 25 01:15:47.989180 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Mar 25 01:15:47.992627 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Mar 25 01:15:48.004675 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 25 01:15:48.035679 disk-uuid[662]: Primary Header is updated. Mar 25 01:15:48.035679 disk-uuid[662]: Secondary Entries is updated. Mar 25 01:15:48.035679 disk-uuid[662]: Secondary Header is updated. Mar 25 01:15:48.049270 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 25 01:15:49.067278 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 25 01:15:49.068169 disk-uuid[663]: The operation has completed successfully. Mar 25 01:15:49.243456 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 25 01:15:49.243673 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 25 01:15:49.345477 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 25 01:15:49.368186 sh[923]: Success Mar 25 01:15:49.391287 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 25 01:15:49.509288 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 25 01:15:49.524859 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 25 01:15:49.546277 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
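The GPT complaints above ("Primary header thinks Alt. header is not at the end of the disk", 9289727 != 16777215) are the usual symptom of a disk image that is smaller than the EBS volume it was written to: the backup GPT header still sits at the old end of the image rather than at the real last LBA, which the disk-uuid step then rewrites. The kernel's check amounts to comparing the alternate-LBA field of the primary header with the device's last sector; a minimal sketch, assuming 512-byte logical sectors (the function name is made up for the sketch):

    # Minimal sketch of the check behind "GPT: Alternate GPT header not at the end of the disk."
    import struct

    def gpt_backup_header_ok(dev: str) -> bool:
        with open(dev, "rb") as f:
            f.seek(0, 2)
            last_lba = f.tell() // 512 - 1          # e.g. 16777215 on this 8 GiB volume
            f.seek(512)                              # primary GPT header lives at LBA 1
            hdr = f.read(92)
        if hdr[:8] != b"EFI PART":
            raise ValueError("no GPT signature at LBA 1")
        alternate_lba = struct.unpack_from("<Q", hdr, 32)[0]   # e.g. 9289727 before repair
        return alternate_lba == last_lba

    # Moving the backup header to the real end of the disk is what tools such as
    # GNU Parted or sgdisk do, as the kernel message suggests.
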
Mar 25 01:15:49.582478 kernel: BTRFS info (device dm-0): first mount of filesystem bf348154-9cb1-474d-801c-0e035a5758cf Mar 25 01:15:49.582541 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 25 01:15:49.584259 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 25 01:15:49.585501 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 25 01:15:49.586530 kernel: BTRFS info (device dm-0): using free space tree Mar 25 01:15:49.629241 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 25 01:15:49.633418 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 25 01:15:49.638249 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 25 01:15:49.649441 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 25 01:15:49.656405 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 25 01:15:49.725940 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9 Mar 25 01:15:49.726014 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 25 01:15:49.726053 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 25 01:15:49.735298 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 25 01:15:49.742352 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9 Mar 25 01:15:49.746404 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 25 01:15:49.754615 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 25 01:15:49.853025 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 25 01:15:49.864479 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 25 01:15:49.953077 ignition[1037]: Ignition 2.20.0 Mar 25 01:15:49.953587 ignition[1037]: Stage: fetch-offline Mar 25 01:15:49.953986 ignition[1037]: no configs at "/usr/lib/ignition/base.d" Mar 25 01:15:49.954010 ignition[1037]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 25 01:15:49.958675 systemd-networkd[1114]: lo: Link UP Mar 25 01:15:49.954479 ignition[1037]: Ignition finished successfully Mar 25 01:15:49.958682 systemd-networkd[1114]: lo: Gained carrier Mar 25 01:15:49.962247 systemd-networkd[1114]: Enumeration completed Mar 25 01:15:49.962392 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 25 01:15:49.963869 systemd-networkd[1114]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:15:49.963876 systemd-networkd[1114]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 25 01:15:49.970899 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 25 01:15:49.977863 systemd[1]: Reached target network.target - Network. Mar 25 01:15:49.979577 systemd-networkd[1114]: eth0: Link UP Mar 25 01:15:49.979583 systemd-networkd[1114]: eth0: Gained carrier Mar 25 01:15:49.979600 systemd-networkd[1114]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:15:49.992441 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 25 01:15:50.005402 systemd-networkd[1114]: eth0: DHCPv4 address 172.31.23.121/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 25 01:15:50.045556 ignition[1123]: Ignition 2.20.0 Mar 25 01:15:50.046037 ignition[1123]: Stage: fetch Mar 25 01:15:50.046620 ignition[1123]: no configs at "/usr/lib/ignition/base.d" Mar 25 01:15:50.046644 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 25 01:15:50.046845 ignition[1123]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 25 01:15:50.064062 ignition[1123]: PUT result: OK Mar 25 01:15:50.068715 ignition[1123]: parsed url from cmdline: "" Mar 25 01:15:50.068741 ignition[1123]: no config URL provided Mar 25 01:15:50.068758 ignition[1123]: reading system config file "/usr/lib/ignition/user.ign" Mar 25 01:15:50.068785 ignition[1123]: no config at "/usr/lib/ignition/user.ign" Mar 25 01:15:50.068817 ignition[1123]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 25 01:15:50.074369 ignition[1123]: PUT result: OK Mar 25 01:15:50.074464 ignition[1123]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Mar 25 01:15:50.082022 ignition[1123]: GET result: OK Mar 25 01:15:50.082171 ignition[1123]: parsing config with SHA512: 9a364b3b7c59de3ad3266240676c51679981209895df0fa78ec042f33ee23fbaf3bdb74ec4e5adcbafb97fff8d97f34ffbb7ca686b415c9f57205ffc6ae22f5e Mar 25 01:15:50.091114 unknown[1123]: fetched base config from "system" Mar 25 01:15:50.091827 ignition[1123]: fetch: fetch complete Mar 25 01:15:50.091131 unknown[1123]: fetched base config from "system" Mar 25 01:15:50.091839 ignition[1123]: fetch: fetch passed Mar 25 01:15:50.091144 unknown[1123]: fetched user config from "aws" Mar 25 01:15:50.091917 ignition[1123]: Ignition finished successfully Mar 25 01:15:50.108260 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 25 01:15:50.114811 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 25 01:15:50.159745 ignition[1130]: Ignition 2.20.0 Mar 25 01:15:50.160269 ignition[1130]: Stage: kargs Mar 25 01:15:50.160848 ignition[1130]: no configs at "/usr/lib/ignition/base.d" Mar 25 01:15:50.160873 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 25 01:15:50.161046 ignition[1130]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 25 01:15:50.164508 ignition[1130]: PUT result: OK Mar 25 01:15:50.182549 ignition[1130]: kargs: kargs passed Mar 25 01:15:50.182698 ignition[1130]: Ignition finished successfully Mar 25 01:15:50.193444 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 25 01:15:50.197344 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 25 01:15:50.233129 ignition[1137]: Ignition 2.20.0 Mar 25 01:15:50.233646 ignition[1137]: Stage: disks Mar 25 01:15:50.234235 ignition[1137]: no configs at "/usr/lib/ignition/base.d" Mar 25 01:15:50.234262 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 25 01:15:50.234444 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 25 01:15:50.240196 ignition[1137]: PUT result: OK Mar 25 01:15:50.249856 ignition[1137]: disks: disks passed Mar 25 01:15:50.249959 ignition[1137]: Ignition finished successfully Mar 25 01:15:50.255367 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 25 01:15:50.259942 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 25 01:15:50.262242 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
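The fetch-stage entries above spell out the IMDSv2 exchange Ignition uses on EC2: PUT a session token to /latest/api/token, then GET the user data with that token, then hash and parse the result. A minimal sketch of the same exchange using the standard IMDSv2 headers follows; it mirrors what the log shows, not Ignition's actual Go implementation, and the helper name is made up.

    # Sketch of the IMDSv2 exchange visible above (PUT a session token, then GET user-data).
    import urllib.request

    IMDS = "http://169.254.169.254"

    def fetch_user_data(ttl_seconds: int = 21600) -> bytes:
        token_req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(token_req, timeout=5) as resp:
            token = resp.read().decode()
        data_req = urllib.request.Request(
            f"{IMDS}/2019-10-01/user-data",                  # same versioned path as in the log
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(data_req, timeout=5) as resp:
            return resp.read()                               # Ignition then hashes and parses this
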
Mar 25 01:15:50.264768 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 25 01:15:50.271776 systemd[1]: Reached target sysinit.target - System Initialization. Mar 25 01:15:50.278408 systemd[1]: Reached target basic.target - Basic System. Mar 25 01:15:50.284652 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 25 01:15:50.341372 systemd-fsck[1145]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 25 01:15:50.346910 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 25 01:15:50.358870 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 25 01:15:50.443586 kernel: EXT4-fs (nvme0n1p9): mounted filesystem a7a89271-ee7d-4bda-a834-705261d6cda9 r/w with ordered data mode. Quota mode: none. Mar 25 01:15:50.444753 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 25 01:15:50.450072 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 25 01:15:50.467798 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 25 01:15:50.475377 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 25 01:15:50.482650 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 25 01:15:50.482731 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 25 01:15:50.482780 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 25 01:15:50.511732 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 25 01:15:50.518113 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 25 01:15:50.538242 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1164) Mar 25 01:15:50.542921 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9 Mar 25 01:15:50.542989 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 25 01:15:50.543016 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 25 01:15:50.556248 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 25 01:15:50.559318 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 25 01:15:50.625950 initrd-setup-root[1188]: cut: /sysroot/etc/passwd: No such file or directory Mar 25 01:15:50.637115 initrd-setup-root[1195]: cut: /sysroot/etc/group: No such file or directory Mar 25 01:15:50.647725 initrd-setup-root[1202]: cut: /sysroot/etc/shadow: No such file or directory Mar 25 01:15:50.656717 initrd-setup-root[1209]: cut: /sysroot/etc/gshadow: No such file or directory Mar 25 01:15:50.818302 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 25 01:15:50.821357 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 25 01:15:50.823485 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 25 01:15:50.858744 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 25 01:15:50.861298 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9 Mar 25 01:15:50.894660 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 25 01:15:50.906995 ignition[1281]: INFO : Ignition 2.20.0 Mar 25 01:15:50.906995 ignition[1281]: INFO : Stage: mount Mar 25 01:15:50.911505 ignition[1281]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 01:15:50.911505 ignition[1281]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 25 01:15:50.911505 ignition[1281]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 25 01:15:50.919791 ignition[1281]: INFO : PUT result: OK Mar 25 01:15:50.925487 ignition[1281]: INFO : mount: mount passed Mar 25 01:15:50.927884 ignition[1281]: INFO : Ignition finished successfully Mar 25 01:15:50.931910 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 25 01:15:50.939350 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 25 01:15:50.969987 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 25 01:15:51.009250 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1293) Mar 25 01:15:51.009688 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9 Mar 25 01:15:51.012736 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 25 01:15:51.012771 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 25 01:15:51.019241 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 25 01:15:51.022507 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 25 01:15:51.062410 ignition[1310]: INFO : Ignition 2.20.0 Mar 25 01:15:51.062410 ignition[1310]: INFO : Stage: files Mar 25 01:15:51.067647 ignition[1310]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 01:15:51.067647 ignition[1310]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 25 01:15:51.067647 ignition[1310]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 25 01:15:51.067647 ignition[1310]: INFO : PUT result: OK Mar 25 01:15:51.085135 ignition[1310]: DEBUG : files: compiled without relabeling support, skipping Mar 25 01:15:51.089282 ignition[1310]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 25 01:15:51.089282 ignition[1310]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 25 01:15:51.100571 ignition[1310]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 25 01:15:51.104536 ignition[1310]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 25 01:15:51.108252 unknown[1310]: wrote ssh authorized keys file for user: core Mar 25 01:15:51.110842 ignition[1310]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 25 01:15:51.116235 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 25 01:15:51.120739 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Mar 25 01:15:51.157472 systemd-networkd[1114]: eth0: Gained IPv6LL Mar 25 01:15:51.299952 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 25 01:15:51.443311 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 25 01:15:51.443311 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 25 
01:15:51.443311 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Mar 25 01:15:51.787669 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 25 01:15:51.905680 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 25 01:15:51.905680 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 25 01:15:51.916795 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 25 01:15:51.916795 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 25 01:15:51.916795 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 25 01:15:51.916795 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 25 01:15:51.916795 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 25 01:15:51.916795 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 25 01:15:51.916795 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 25 01:15:51.916795 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 25 01:15:51.916795 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 25 01:15:51.916795 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 25 01:15:51.916795 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 25 01:15:51.916795 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 25 01:15:51.916795 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Mar 25 01:15:52.315140 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 25 01:15:52.622831 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 25 01:15:52.622831 ignition[1310]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 25 01:15:52.633514 ignition[1310]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 25 01:15:52.633514 ignition[1310]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Mar 25 01:15:52.633514 ignition[1310]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 25 01:15:52.633514 ignition[1310]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Mar 25 01:15:52.633514 ignition[1310]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Mar 25 01:15:52.633514 ignition[1310]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 25 01:15:52.633514 ignition[1310]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 25 01:15:52.633514 ignition[1310]: INFO : files: files passed Mar 25 01:15:52.633514 ignition[1310]: INFO : Ignition finished successfully Mar 25 01:15:52.658256 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 25 01:15:52.668445 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 25 01:15:52.671784 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 25 01:15:52.700015 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 25 01:15:52.702426 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 25 01:15:52.717290 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 25 01:15:52.721337 initrd-setup-root-after-ignition[1339]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 25 01:15:52.724948 initrd-setup-root-after-ignition[1343]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 25 01:15:52.732252 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 25 01:15:52.736921 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 25 01:15:52.747419 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 25 01:15:52.809346 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 25 01:15:52.809755 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 25 01:15:52.819932 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 25 01:15:52.822901 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 25 01:15:52.824995 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 25 01:15:52.836647 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 25 01:15:52.880811 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 25 01:15:52.884433 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 25 01:15:52.917759 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 25 01:15:52.924648 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 25 01:15:52.927140 systemd[1]: Stopped target timers.target - Timer Units. Mar 25 01:15:52.929135 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 25 01:15:52.929403 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 25 01:15:52.940592 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Mar 25 01:15:52.943341 systemd[1]: Stopped target basic.target - Basic System. Mar 25 01:15:52.945681 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 25 01:15:52.954910 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 25 01:15:52.957959 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 25 01:15:52.965220 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 25 01:15:52.967802 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 25 01:15:52.975475 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 25 01:15:52.978120 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 25 01:15:52.984868 systemd[1]: Stopped target swap.target - Swaps. Mar 25 01:15:52.986992 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 25 01:15:52.987610 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 25 01:15:52.995665 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 25 01:15:52.998345 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 25 01:15:53.001192 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 25 01:15:53.008140 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 25 01:15:53.010642 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 25 01:15:53.010869 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 25 01:15:53.013345 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 25 01:15:53.013563 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 25 01:15:53.016165 systemd[1]: ignition-files.service: Deactivated successfully. Mar 25 01:15:53.016411 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 25 01:15:53.036854 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 25 01:15:53.039370 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 25 01:15:53.039657 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 25 01:15:53.049367 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 25 01:15:53.055555 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 25 01:15:53.055931 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 25 01:15:53.064634 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 25 01:15:53.064866 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 25 01:15:53.092707 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 25 01:15:53.095361 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 25 01:15:53.113523 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 25 01:15:53.121947 ignition[1363]: INFO : Ignition 2.20.0 Mar 25 01:15:53.121947 ignition[1363]: INFO : Stage: umount Mar 25 01:15:53.127098 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 01:15:53.127098 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 25 01:15:53.127098 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 25 01:15:53.129488 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Mar 25 01:15:53.140451 ignition[1363]: INFO : PUT result: OK Mar 25 01:15:53.132421 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 25 01:15:53.146107 ignition[1363]: INFO : umount: umount passed Mar 25 01:15:53.151949 ignition[1363]: INFO : Ignition finished successfully Mar 25 01:15:53.148977 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 25 01:15:53.149186 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 25 01:15:53.155656 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 25 01:15:53.155813 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 25 01:15:53.165028 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 25 01:15:53.165134 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 25 01:15:53.167387 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 25 01:15:53.167468 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 25 01:15:53.169698 systemd[1]: Stopped target network.target - Network. Mar 25 01:15:53.171711 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 25 01:15:53.171796 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 25 01:15:53.174519 systemd[1]: Stopped target paths.target - Path Units. Mar 25 01:15:53.176532 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 25 01:15:53.197761 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 25 01:15:53.200577 systemd[1]: Stopped target slices.target - Slice Units. Mar 25 01:15:53.202353 systemd[1]: Stopped target sockets.target - Socket Units. Mar 25 01:15:53.204730 systemd[1]: iscsid.socket: Deactivated successfully. Mar 25 01:15:53.204813 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 25 01:15:53.216766 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 25 01:15:53.216858 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 25 01:15:53.223317 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 25 01:15:53.223434 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 25 01:15:53.229768 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 25 01:15:53.229865 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 25 01:15:53.236828 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 25 01:15:53.236919 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 25 01:15:53.239559 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 25 01:15:53.242397 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 25 01:15:53.257864 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 25 01:15:53.258285 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 25 01:15:53.270919 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 25 01:15:53.271777 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 25 01:15:53.271889 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 25 01:15:53.286158 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 25 01:15:53.291884 systemd[1]: systemd-networkd.service: Deactivated successfully. 
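Every Ignition stage in this log (mount, files, and the umount stage just above), and coreos-metadata later in the boot, opens with "PUT http://169.254.169.254/latest/api/token" before any metadata GET: that is the IMDSv2 session-token handshake. A minimal sketch of the same flow, assuming it runs from inside an EC2 instance (the TTL value is an arbitrary example):

import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: PUT to the token endpoint (the "PUT .../latest/api/token" lines above).
req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},  # example TTL
)
token = urllib.request.urlopen(req, timeout=2).read().decode()

# Step 2: GET metadata with the token, mirroring entries such as
# "Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id".
req = urllib.request.Request(
    f"{IMDS}/2021-01-03/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(req, timeout=2).read().decode())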
Mar 25 01:15:53.292095 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 25 01:15:53.302638 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 25 01:15:53.303054 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 25 01:15:53.303122 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 25 01:15:53.315973 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 25 01:15:53.320386 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 25 01:15:53.320512 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 25 01:15:53.332908 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 25 01:15:53.333026 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:15:53.344492 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 25 01:15:53.344595 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 25 01:15:53.347352 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 25 01:15:53.351600 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 25 01:15:53.377673 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 25 01:15:53.380545 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 25 01:15:53.388007 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 25 01:15:53.388789 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 25 01:15:53.393834 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 25 01:15:53.393961 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 25 01:15:53.396624 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 25 01:15:53.397189 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 25 01:15:53.401290 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 25 01:15:53.401386 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 25 01:15:53.404056 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 25 01:15:53.404145 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 25 01:15:53.406612 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 25 01:15:53.406692 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 01:15:53.412426 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 25 01:15:53.414604 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 25 01:15:53.414737 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 25 01:15:53.419678 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 25 01:15:53.419775 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:15:53.457257 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 25 01:15:53.458710 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 25 01:15:53.466672 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 25 01:15:53.470726 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 25 01:15:53.494526 systemd[1]: Switching root. 
Mar 25 01:15:53.530304 systemd-journald[251]: Journal stopped Mar 25 01:15:55.833129 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Mar 25 01:15:55.833276 kernel: SELinux: policy capability network_peer_controls=1 Mar 25 01:15:55.833312 kernel: SELinux: policy capability open_perms=1 Mar 25 01:15:55.833343 kernel: SELinux: policy capability extended_socket_class=1 Mar 25 01:15:55.833378 kernel: SELinux: policy capability always_check_network=0 Mar 25 01:15:55.833430 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 25 01:15:55.833463 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 25 01:15:55.833503 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 25 01:15:55.833532 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 25 01:15:55.833563 kernel: audit: type=1403 audit(1742865353.871:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 25 01:15:55.833602 systemd[1]: Successfully loaded SELinux policy in 50.728ms. Mar 25 01:15:55.833647 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.019ms. Mar 25 01:15:55.833680 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 25 01:15:55.833724 systemd[1]: Detected virtualization amazon. Mar 25 01:15:55.833753 systemd[1]: Detected architecture arm64. Mar 25 01:15:55.833786 systemd[1]: Detected first boot. Mar 25 01:15:55.833819 systemd[1]: Initializing machine ID from VM UUID. Mar 25 01:15:55.833847 zram_generator::config[1408]: No configuration found. Mar 25 01:15:55.833880 kernel: NET: Registered PF_VSOCK protocol family Mar 25 01:15:55.833909 systemd[1]: Populated /etc with preset unit settings. Mar 25 01:15:55.833941 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 25 01:15:55.833971 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 25 01:15:55.834009 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 25 01:15:55.834039 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 25 01:15:55.834071 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 25 01:15:55.834103 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 25 01:15:55.834133 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 25 01:15:55.834163 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 25 01:15:55.834194 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 25 01:15:55.834500 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 25 01:15:55.834541 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 25 01:15:55.834570 systemd[1]: Created slice user.slice - User and Session Slice. Mar 25 01:15:55.834599 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 25 01:15:55.834630 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 25 01:15:55.834694 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Mar 25 01:15:55.834732 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 25 01:15:55.834819 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 25 01:15:55.834852 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 25 01:15:55.834884 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 25 01:15:55.834920 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 25 01:15:55.834949 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 25 01:15:55.834980 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 25 01:15:55.835009 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 25 01:15:55.835043 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 25 01:15:55.835074 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 25 01:15:55.835104 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 25 01:15:55.835134 systemd[1]: Reached target slices.target - Slice Units. Mar 25 01:15:55.835168 systemd[1]: Reached target swap.target - Swaps. Mar 25 01:15:55.835197 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 25 01:15:55.835328 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 25 01:15:55.835361 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 25 01:15:55.835394 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 25 01:15:55.835425 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 25 01:15:55.835454 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 25 01:15:55.835483 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 25 01:15:55.835511 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 25 01:15:55.835547 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 25 01:15:55.835579 systemd[1]: Mounting media.mount - External Media Directory... Mar 25 01:15:55.835607 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 25 01:15:55.835637 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 25 01:15:55.835667 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 25 01:15:55.835700 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 25 01:15:55.835729 systemd[1]: Reached target machines.target - Containers. Mar 25 01:15:55.835758 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 25 01:15:55.836527 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 01:15:55.836593 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 25 01:15:55.836622 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 25 01:15:55.836651 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 25 01:15:55.836680 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Mar 25 01:15:55.836709 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 25 01:15:55.836737 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 25 01:15:55.836766 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 25 01:15:55.836795 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 25 01:15:55.836834 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 25 01:15:55.836865 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 25 01:15:55.836893 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 25 01:15:55.836922 systemd[1]: Stopped systemd-fsck-usr.service. Mar 25 01:15:55.836952 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 01:15:55.836984 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 25 01:15:55.837015 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 25 01:15:55.837044 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 25 01:15:55.837077 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 25 01:15:55.837109 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 25 01:15:55.837137 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 25 01:15:55.837167 systemd[1]: verity-setup.service: Deactivated successfully. Mar 25 01:15:55.837223 systemd[1]: Stopped verity-setup.service. Mar 25 01:15:55.837266 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 25 01:15:55.837295 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 25 01:15:55.837326 systemd[1]: Mounted media.mount - External Media Directory. Mar 25 01:15:55.837356 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 25 01:15:55.837387 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 25 01:15:55.837415 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 25 01:15:55.837444 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 25 01:15:55.837484 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 25 01:15:55.837514 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 25 01:15:55.837553 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 01:15:55.837590 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 01:15:55.837621 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 25 01:15:55.837650 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 25 01:15:55.837679 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 25 01:15:55.837713 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 25 01:15:55.837744 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Mar 25 01:15:55.837773 kernel: loop: module loaded Mar 25 01:15:55.837805 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 25 01:15:55.837858 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 25 01:15:55.842722 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 25 01:15:55.842769 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 25 01:15:55.842803 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 25 01:15:55.842833 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 01:15:55.842871 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 25 01:15:55.842901 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 25 01:15:55.842932 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 25 01:15:55.842963 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 25 01:15:55.843000 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 25 01:15:55.843031 kernel: fuse: init (API version 7.39) Mar 25 01:15:55.843064 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 01:15:55.843096 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 25 01:15:55.843143 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 25 01:15:55.843177 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 25 01:15:55.843246 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 25 01:15:55.843283 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 25 01:15:55.843312 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 25 01:15:55.843351 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 25 01:15:55.843382 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 25 01:15:55.843413 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 25 01:15:55.843445 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 25 01:15:55.843475 kernel: ACPI: bus type drm_connector registered Mar 25 01:15:55.843509 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 25 01:15:55.843539 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 25 01:15:55.843574 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 25 01:15:55.843609 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 25 01:15:55.843639 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 25 01:15:55.843668 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 25 01:15:55.843697 kernel: loop0: detected capacity change from 0 to 189592 Mar 25 01:15:55.843726 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 25 01:15:55.843755 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
Mar 25 01:15:55.843847 systemd-journald[1494]: Collecting audit messages is disabled. Mar 25 01:15:55.843903 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:15:55.843946 systemd-journald[1494]: Journal started Mar 25 01:15:55.843994 systemd-journald[1494]: Runtime Journal (/run/log/journal/ec2c468f40434427fcc494f3f015ccda) is 8M, max 75.3M, 67.3M free. Mar 25 01:15:54.965814 systemd[1]: Queued start job for default target multi-user.target. Mar 25 01:15:54.979689 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Mar 25 01:15:54.980578 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 25 01:15:55.852850 systemd[1]: Started systemd-journald.service - Journal Service. Mar 25 01:15:55.866839 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 25 01:15:55.944354 kernel: loop1: detected capacity change from 0 to 126448 Mar 25 01:15:55.938732 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 25 01:15:55.950777 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 25 01:15:55.985065 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 25 01:15:56.011641 systemd-journald[1494]: Time spent on flushing to /var/log/journal/ec2c468f40434427fcc494f3f015ccda is 64.606ms for 928 entries. Mar 25 01:15:56.011641 systemd-journald[1494]: System Journal (/var/log/journal/ec2c468f40434427fcc494f3f015ccda) is 8M, max 195.6M, 187.6M free. Mar 25 01:15:56.110819 systemd-journald[1494]: Received client request to flush runtime journal. Mar 25 01:15:56.110908 kernel: loop2: detected capacity change from 0 to 103832 Mar 25 01:15:56.013308 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 25 01:15:56.024078 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 25 01:15:56.085888 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 25 01:15:56.099621 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 25 01:15:56.115781 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 25 01:15:56.149765 systemd-tmpfiles[1559]: ACLs are not supported, ignoring. Mar 25 01:15:56.158079 kernel: loop3: detected capacity change from 0 to 54976 Mar 25 01:15:56.149804 systemd-tmpfiles[1559]: ACLs are not supported, ignoring. Mar 25 01:15:56.191047 udevadm[1563]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 25 01:15:56.196509 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 25 01:15:56.281255 kernel: loop4: detected capacity change from 0 to 189592 Mar 25 01:15:56.322263 kernel: loop5: detected capacity change from 0 to 126448 Mar 25 01:15:56.356289 kernel: loop6: detected capacity change from 0 to 103832 Mar 25 01:15:56.390268 kernel: loop7: detected capacity change from 0 to 54976 Mar 25 01:15:56.406461 (sd-merge)[1568]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Mar 25 01:15:56.409635 (sd-merge)[1568]: Merged extensions into '/usr'. Mar 25 01:15:56.425821 systemd[1]: Reload requested from client PID 1523 ('systemd-sysext') (unit systemd-sysext.service)... Mar 25 01:15:56.426129 systemd[1]: Reloading... Mar 25 01:15:56.636653 zram_generator::config[1596]: No configuration found. 
Mar 25 01:15:56.798944 ldconfig[1516]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 25 01:15:56.905398 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:15:57.070234 systemd[1]: Reloading finished in 642 ms. Mar 25 01:15:57.094234 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 25 01:15:57.098896 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 25 01:15:57.103484 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 25 01:15:57.121491 systemd[1]: Starting ensure-sysext.service... Mar 25 01:15:57.128467 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 25 01:15:57.136737 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 25 01:15:57.166495 systemd[1]: Reload requested from client PID 1650 ('systemctl') (unit ensure-sysext.service)... Mar 25 01:15:57.166527 systemd[1]: Reloading... Mar 25 01:15:57.228069 systemd-tmpfiles[1651]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 25 01:15:57.228637 systemd-tmpfiles[1651]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 25 01:15:57.233713 systemd-tmpfiles[1651]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 25 01:15:57.235915 systemd-udevd[1652]: Using default interface naming scheme 'v255'. Mar 25 01:15:57.236406 systemd-tmpfiles[1651]: ACLs are not supported, ignoring. Mar 25 01:15:57.236545 systemd-tmpfiles[1651]: ACLs are not supported, ignoring. Mar 25 01:15:57.249103 systemd-tmpfiles[1651]: Detected autofs mount point /boot during canonicalization of boot. Mar 25 01:15:57.249131 systemd-tmpfiles[1651]: Skipping /boot Mar 25 01:15:57.294996 systemd-tmpfiles[1651]: Detected autofs mount point /boot during canonicalization of boot. Mar 25 01:15:57.295034 systemd-tmpfiles[1651]: Skipping /boot Mar 25 01:15:57.378249 zram_generator::config[1687]: No configuration found. Mar 25 01:15:57.569091 (udev-worker)[1719]: Network interface NamePolicy= disabled on kernel command line. Mar 25 01:15:57.670233 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1688) Mar 25 01:15:57.771521 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:15:57.947740 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 25 01:15:57.948859 systemd[1]: Reloading finished in 781 ms. Mar 25 01:15:57.966955 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 25 01:15:57.991559 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 25 01:15:58.108960 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 25 01:15:58.121713 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 25 01:15:58.132109 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
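The (sd-merge) lines above show systemd-sysext picking up the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-ami' images and merging them into /usr, after which systemd-sysext itself requests the daemon reload that follows. As a rough sketch, assuming the usual sysext search directories, the images visible to that merge could be listed like this:

from pathlib import Path

# Assumed standard search locations for extension images.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in SEARCH_DIRS:
    p = Path(d)
    if not p.is_dir():
        continue
    for entry in sorted(p.iterdir()):
        # Symlinks such as /etc/extensions/kubernetes.raw ->
        # /opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw resolve here.
        print(f"{entry} -> {entry.resolve()}")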
Mar 25 01:15:58.139039 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 25 01:15:58.143757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 01:15:58.148819 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 25 01:15:58.154066 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 25 01:15:58.165782 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 25 01:15:58.171734 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 25 01:15:58.176647 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 01:15:58.179914 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 25 01:15:58.183462 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 01:15:58.188816 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 25 01:15:58.204285 lvm[1850]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 25 01:15:58.203080 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 25 01:15:58.232846 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 25 01:15:58.240338 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 25 01:15:58.259110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:15:58.276304 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 25 01:15:58.282905 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 01:15:58.283378 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 25 01:15:58.289597 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 01:15:58.291329 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 01:15:58.306691 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 25 01:15:58.307688 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 25 01:15:58.325640 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 25 01:15:58.347551 systemd[1]: Finished ensure-sysext.service. Mar 25 01:15:58.351913 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 25 01:15:58.356504 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 01:15:58.361816 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 25 01:15:58.370885 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 25 01:15:58.381337 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 25 01:15:58.395487 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 25 01:15:58.398321 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Mar 25 01:15:58.398393 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 01:15:58.398472 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 25 01:15:58.398541 systemd[1]: Reached target time-set.target - System Time Set. Mar 25 01:15:58.407601 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 25 01:15:58.416325 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 25 01:15:58.437730 lvm[1877]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 25 01:15:58.443672 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 25 01:15:58.466393 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 25 01:15:58.478490 augenrules[1896]: No rules Mar 25 01:15:58.483065 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 01:15:58.486435 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 01:15:58.489817 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 01:15:58.490959 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 01:15:58.497027 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 25 01:15:58.497673 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 25 01:15:58.501753 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 01:15:58.502166 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 25 01:15:58.519059 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 25 01:15:58.519949 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 25 01:15:58.527800 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 25 01:15:58.539344 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 25 01:15:58.544629 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 25 01:15:58.601003 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 25 01:15:58.620370 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:15:58.724714 systemd-networkd[1856]: lo: Link UP Mar 25 01:15:58.725242 systemd-networkd[1856]: lo: Gained carrier Mar 25 01:15:58.728422 systemd-networkd[1856]: Enumeration completed Mar 25 01:15:58.728755 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 25 01:15:58.729871 systemd-networkd[1856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:15:58.730504 systemd-networkd[1856]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 25 01:15:58.735217 systemd-networkd[1856]: eth0: Link UP Mar 25 01:15:58.735419 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Mar 25 01:15:58.740569 systemd-networkd[1856]: eth0: Gained carrier Mar 25 01:15:58.740622 systemd-networkd[1856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:15:58.745601 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 25 01:15:58.754043 systemd-resolved[1861]: Positive Trust Anchors: Mar 25 01:15:58.754088 systemd-resolved[1861]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 25 01:15:58.754151 systemd-resolved[1861]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 25 01:15:58.757333 systemd-networkd[1856]: eth0: DHCPv4 address 172.31.23.121/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 25 01:15:58.764438 systemd-resolved[1861]: Defaulting to hostname 'linux'. Mar 25 01:15:58.768783 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 25 01:15:58.772258 systemd[1]: Reached target network.target - Network. Mar 25 01:15:58.774700 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 25 01:15:58.778003 systemd[1]: Reached target sysinit.target - System Initialization. Mar 25 01:15:58.780614 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 25 01:15:58.783453 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 25 01:15:58.786551 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 25 01:15:58.789087 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 25 01:15:58.791930 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 25 01:15:58.794745 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 25 01:15:58.794798 systemd[1]: Reached target paths.target - Path Units. Mar 25 01:15:58.796757 systemd[1]: Reached target timers.target - Timer Units. Mar 25 01:15:58.799548 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 25 01:15:58.806225 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 25 01:15:58.816165 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 25 01:15:58.819720 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 25 01:15:58.822810 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 25 01:15:58.833328 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 25 01:15:58.837611 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 25 01:15:58.842496 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 25 01:15:58.846687 systemd[1]: Reached target sockets.target - Socket Units. 
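systemd-networkd reports "DHCPv4 address 172.31.23.121/20, gateway 172.31.16.1 acquired from 172.31.16.1" above. A quick check with Python's ipaddress module, using only those two values from the log, shows the /20 the lease describes and that the gateway is on-link:

import ipaddress

iface = ipaddress.ip_interface("172.31.23.121/20")   # address from the DHCP lease
gateway = ipaddress.ip_address("172.31.16.1")        # gateway from the same entry

print(iface.network)             # 172.31.16.0/20
print(gateway in iface.network)  # True: the gateway is inside the leased subnet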
Mar 25 01:15:58.849457 systemd[1]: Reached target basic.target - Basic System. Mar 25 01:15:58.852066 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 25 01:15:58.852119 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 25 01:15:58.856379 systemd[1]: Starting containerd.service - containerd container runtime... Mar 25 01:15:58.864315 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 25 01:15:58.870639 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 25 01:15:58.880336 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 25 01:15:58.887872 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 25 01:15:58.890341 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 25 01:15:58.893985 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 25 01:15:58.906820 systemd[1]: Started ntpd.service - Network Time Service. Mar 25 01:15:58.915252 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 25 01:15:58.924983 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 25 01:15:58.935569 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 25 01:15:58.951280 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 25 01:15:58.966716 jq[1926]: false Mar 25 01:15:58.973677 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 25 01:15:58.977912 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 25 01:15:58.980431 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 25 01:15:58.987370 systemd[1]: Starting update-engine.service - Update Engine... Mar 25 01:15:58.993560 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 25 01:15:58.999155 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 25 01:15:59.014008 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 25 01:15:59.015359 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 25 01:15:59.046457 jq[1939]: true Mar 25 01:15:59.060556 dbus-daemon[1925]: [system] SELinux support is enabled Mar 25 01:15:59.074649 dbus-daemon[1925]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1856 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 25 01:15:59.076682 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 25 01:15:59.089646 (ntainerd)[1952]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 25 01:15:59.089746 systemd[1]: motdgen.service: Deactivated successfully. Mar 25 01:15:59.091294 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 25 01:15:59.115177 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Mar 25 01:15:59.118865 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 25 01:15:59.151576 dbus-daemon[1925]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 25 01:15:59.152101 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 25 01:15:59.152185 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 25 01:15:59.157443 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 25 01:15:59.157500 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 25 01:15:59.174329 extend-filesystems[1927]: Found loop4 Mar 25 01:15:59.174329 extend-filesystems[1927]: Found loop5 Mar 25 01:15:59.174329 extend-filesystems[1927]: Found loop6 Mar 25 01:15:59.174329 extend-filesystems[1927]: Found loop7 Mar 25 01:15:59.174329 extend-filesystems[1927]: Found nvme0n1 Mar 25 01:15:59.174329 extend-filesystems[1927]: Found nvme0n1p1 Mar 25 01:15:59.174329 extend-filesystems[1927]: Found nvme0n1p2 Mar 25 01:15:59.174329 extend-filesystems[1927]: Found nvme0n1p3 Mar 25 01:15:59.174329 extend-filesystems[1927]: Found usr Mar 25 01:15:59.174329 extend-filesystems[1927]: Found nvme0n1p4 Mar 25 01:15:59.174329 extend-filesystems[1927]: Found nvme0n1p6 Mar 25 01:15:59.174329 extend-filesystems[1927]: Found nvme0n1p7 Mar 25 01:15:59.174329 extend-filesystems[1927]: Found nvme0n1p9 Mar 25 01:15:59.174329 extend-filesystems[1927]: Checking size of /dev/nvme0n1p9 Mar 25 01:15:59.182645 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 25 01:15:59.229001 jq[1951]: true Mar 25 01:15:59.243489 tar[1943]: linux-arm64/helm Mar 25 01:15:59.245234 coreos-metadata[1924]: Mar 25 01:15:59.244 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 25 01:15:59.252165 ntpd[1929]: ntpd 4.2.8p17@1.4004-o Mon Mar 24 23:09:33 UTC 2025 (1): Starting Mar 25 01:15:59.257754 coreos-metadata[1924]: Mar 25 01:15:59.256 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 25 01:15:59.257821 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: ntpd 4.2.8p17@1.4004-o Mon Mar 24 23:09:33 UTC 2025 (1): Starting Mar 25 01:15:59.257821 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 25 01:15:59.257821 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: ---------------------------------------------------- Mar 25 01:15:59.257821 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: ntp-4 is maintained by Network Time Foundation, Mar 25 01:15:59.257821 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 25 01:15:59.257821 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: corporation. 
Support and training for ntp-4 are Mar 25 01:15:59.257821 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: available at https://www.nwtime.org/support Mar 25 01:15:59.257821 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: ---------------------------------------------------- Mar 25 01:15:59.252262 ntpd[1929]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 25 01:15:59.278725 coreos-metadata[1924]: Mar 25 01:15:59.258 INFO Fetch successful Mar 25 01:15:59.278725 coreos-metadata[1924]: Mar 25 01:15:59.258 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 25 01:15:59.278725 coreos-metadata[1924]: Mar 25 01:15:59.265 INFO Fetch successful Mar 25 01:15:59.278725 coreos-metadata[1924]: Mar 25 01:15:59.265 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 25 01:15:59.278725 coreos-metadata[1924]: Mar 25 01:15:59.266 INFO Fetch successful Mar 25 01:15:59.278725 coreos-metadata[1924]: Mar 25 01:15:59.266 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 25 01:15:59.278725 coreos-metadata[1924]: Mar 25 01:15:59.268 INFO Fetch successful Mar 25 01:15:59.278725 coreos-metadata[1924]: Mar 25 01:15:59.268 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 25 01:15:59.278725 coreos-metadata[1924]: Mar 25 01:15:59.275 INFO Fetch failed with 404: resource not found Mar 25 01:15:59.278725 coreos-metadata[1924]: Mar 25 01:15:59.275 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 25 01:15:59.278725 coreos-metadata[1924]: Mar 25 01:15:59.276 INFO Fetch successful Mar 25 01:15:59.278725 coreos-metadata[1924]: Mar 25 01:15:59.276 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 25 01:15:59.278725 coreos-metadata[1924]: Mar 25 01:15:59.277 INFO Fetch successful Mar 25 01:15:59.278725 coreos-metadata[1924]: Mar 25 01:15:59.277 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 25 01:15:59.279618 update_engine[1938]: I20250325 01:15:59.276921 1938 main.cc:92] Flatcar Update Engine starting Mar 25 01:15:59.280107 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: proto: precision = 0.096 usec (-23) Mar 25 01:15:59.280107 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: basedate set to 2025-03-12 Mar 25 01:15:59.280107 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: gps base set to 2025-03-16 (week 2358) Mar 25 01:15:59.252284 ntpd[1929]: ---------------------------------------------------- Mar 25 01:15:59.304392 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: Listen and drop on 0 v6wildcard [::]:123 Mar 25 01:15:59.304392 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 25 01:15:59.304392 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: Listen normally on 2 lo 127.0.0.1:123 Mar 25 01:15:59.304392 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: Listen normally on 3 eth0 172.31.23.121:123 Mar 25 01:15:59.304392 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: Listen normally on 4 lo [::1]:123 Mar 25 01:15:59.304392 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: bind(21) AF_INET6 fe80::4aa:28ff:fea9:b5cb%2#123 flags 0x11 failed: Cannot assign requested address Mar 25 01:15:59.304392 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: unable to create socket on eth0 (5) for fe80::4aa:28ff:fea9:b5cb%2#123 Mar 25 01:15:59.304392 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: failed to init interface for address fe80::4aa:28ff:fea9:b5cb%2 Mar 25 01:15:59.304392 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: Listening on 
routing socket on fd #21 for interface updates Mar 25 01:15:59.304843 coreos-metadata[1924]: Mar 25 01:15:59.283 INFO Fetch successful Mar 25 01:15:59.304843 coreos-metadata[1924]: Mar 25 01:15:59.283 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 25 01:15:59.304843 coreos-metadata[1924]: Mar 25 01:15:59.290 INFO Fetch successful Mar 25 01:15:59.304843 coreos-metadata[1924]: Mar 25 01:15:59.290 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 25 01:15:59.304843 coreos-metadata[1924]: Mar 25 01:15:59.294 INFO Fetch successful Mar 25 01:15:59.299955 systemd[1]: Started update-engine.service - Update Engine. Mar 25 01:15:59.252303 ntpd[1929]: ntp-4 is maintained by Network Time Foundation, Mar 25 01:15:59.252321 ntpd[1929]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 25 01:15:59.252338 ntpd[1929]: corporation. Support and training for ntp-4 are Mar 25 01:15:59.252356 ntpd[1929]: available at https://www.nwtime.org/support Mar 25 01:15:59.252374 ntpd[1929]: ---------------------------------------------------- Mar 25 01:15:59.268374 ntpd[1929]: proto: precision = 0.096 usec (-23) Mar 25 01:15:59.273856 ntpd[1929]: basedate set to 2025-03-12 Mar 25 01:15:59.273897 ntpd[1929]: gps base set to 2025-03-16 (week 2358) Mar 25 01:15:59.289437 ntpd[1929]: Listen and drop on 0 v6wildcard [::]:123 Mar 25 01:15:59.289525 ntpd[1929]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 25 01:15:59.289810 ntpd[1929]: Listen normally on 2 lo 127.0.0.1:123 Mar 25 01:15:59.321109 update_engine[1938]: I20250325 01:15:59.306499 1938 update_check_scheduler.cc:74] Next update check in 2m36s Mar 25 01:15:59.321192 extend-filesystems[1927]: Resized partition /dev/nvme0n1p9 Mar 25 01:15:59.289879 ntpd[1929]: Listen normally on 3 eth0 172.31.23.121:123 Mar 25 01:15:59.352599 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 25 01:15:59.352599 ntpd[1929]: 25 Mar 01:15:59 ntpd[1929]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 25 01:15:59.352688 extend-filesystems[1977]: resize2fs 1.47.2 (1-Jan-2025) Mar 25 01:15:59.366424 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Mar 25 01:15:59.289953 ntpd[1929]: Listen normally on 4 lo [::1]:123 Mar 25 01:15:59.290034 ntpd[1929]: bind(21) AF_INET6 fe80::4aa:28ff:fea9:b5cb%2#123 flags 0x11 failed: Cannot assign requested address Mar 25 01:15:59.290074 ntpd[1929]: unable to create socket on eth0 (5) for fe80::4aa:28ff:fea9:b5cb%2#123 Mar 25 01:15:59.290102 ntpd[1929]: failed to init interface for address fe80::4aa:28ff:fea9:b5cb%2 Mar 25 01:15:59.290161 ntpd[1929]: Listening on routing socket on fd #21 for interface updates Mar 25 01:15:59.327665 ntpd[1929]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 25 01:15:59.327719 ntpd[1929]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 25 01:15:59.384503 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 25 01:15:59.497668 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Mar 25 01:15:59.515331 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 25 01:15:59.520494 extend-filesystems[1977]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 25 01:15:59.520494 extend-filesystems[1977]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 25 01:15:59.520494 extend-filesystems[1977]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Mar 25 01:15:59.542907 extend-filesystems[1927]: Resized filesystem in /dev/nvme0n1p9 Mar 25 01:15:59.548119 bash[2006]: Updated "/home/core/.ssh/authorized_keys" Mar 25 01:15:59.528821 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 25 01:15:59.529266 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 25 01:15:59.543945 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 25 01:15:59.571966 systemd[1]: Starting sshkeys.service... Mar 25 01:15:59.581960 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 25 01:15:59.589877 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 25 01:15:59.682876 systemd-logind[1936]: Watching system buttons on /dev/input/event0 (Power Button) Mar 25 01:15:59.682935 systemd-logind[1936]: Watching system buttons on /dev/input/event1 (Sleep Button) Mar 25 01:15:59.692704 systemd-logind[1936]: New seat seat0. Mar 25 01:15:59.709250 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1691) Mar 25 01:15:59.709288 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 25 01:15:59.731262 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 25 01:15:59.736270 systemd[1]: Started systemd-logind.service - User Login Management. Mar 25 01:15:59.767952 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 25 01:15:59.768637 dbus-daemon[1925]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 25 01:15:59.771076 dbus-daemon[1925]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1965 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 25 01:15:59.780817 systemd[1]: Starting polkit.service - Authorization Manager... Mar 25 01:15:59.805436 containerd[1952]: time="2025-03-25T01:15:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 25 01:15:59.814072 containerd[1952]: time="2025-03-25T01:15:59.812639915Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 25 01:15:59.836896 polkitd[2018]: Started polkitd version 121 Mar 25 01:15:59.855584 polkitd[2018]: Loading rules from directory /etc/polkit-1/rules.d Mar 25 01:15:59.855715 polkitd[2018]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 25 01:15:59.858028 polkitd[2018]: Finished loading, compiling and executing 2 rules Mar 25 01:15:59.859964 dbus-daemon[1925]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 25 01:15:59.860259 systemd[1]: Started polkit.service - Authorization Manager. 
Mar 25 01:15:59.861912 polkitd[2018]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 25 01:15:59.900546 containerd[1952]: time="2025-03-25T01:15:59.900440639Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.296µs" Mar 25 01:15:59.900546 containerd[1952]: time="2025-03-25T01:15:59.900531515Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 25 01:15:59.900730 containerd[1952]: time="2025-03-25T01:15:59.900573671Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 25 01:15:59.900916 containerd[1952]: time="2025-03-25T01:15:59.900870263Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 25 01:15:59.900975 containerd[1952]: time="2025-03-25T01:15:59.900916859Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 25 01:15:59.901021 containerd[1952]: time="2025-03-25T01:15:59.900973511Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 25 01:15:59.901132 containerd[1952]: time="2025-03-25T01:15:59.901089515Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 25 01:15:59.901192 containerd[1952]: time="2025-03-25T01:15:59.901125215Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 25 01:15:59.910078 containerd[1952]: time="2025-03-25T01:15:59.909172427Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 25 01:15:59.910078 containerd[1952]: time="2025-03-25T01:15:59.909254579Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 25 01:15:59.910078 containerd[1952]: time="2025-03-25T01:15:59.909288851Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 25 01:15:59.910078 containerd[1952]: time="2025-03-25T01:15:59.909311795Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 25 01:15:59.910078 containerd[1952]: time="2025-03-25T01:15:59.909498071Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 25 01:15:59.910078 containerd[1952]: time="2025-03-25T01:15:59.909900671Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 25 01:15:59.910078 containerd[1952]: time="2025-03-25T01:15:59.909967943Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 25 01:15:59.910078 containerd[1952]: time="2025-03-25T01:15:59.909993095Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 25 01:15:59.910078 containerd[1952]: time="2025-03-25T01:15:59.910058903Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 25 01:15:59.910555 containerd[1952]: time="2025-03-25T01:15:59.910509563Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 25 01:15:59.910699 containerd[1952]: time="2025-03-25T01:15:59.910631111Z" level=info msg="metadata content store policy set" policy=shared Mar 25 01:15:59.916460 systemd-hostnamed[1965]: Hostname set to (transient) Mar 25 01:15:59.916461 systemd-resolved[1861]: System hostname changed to 'ip-172-31-23-121'. Mar 25 01:15:59.922270 containerd[1952]: time="2025-03-25T01:15:59.919994411Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 25 01:15:59.922270 containerd[1952]: time="2025-03-25T01:15:59.920084867Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 25 01:15:59.922270 containerd[1952]: time="2025-03-25T01:15:59.920116799Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 25 01:15:59.922270 containerd[1952]: time="2025-03-25T01:15:59.920146127Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 25 01:15:59.922270 containerd[1952]: time="2025-03-25T01:15:59.920174531Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 25 01:15:59.922270 containerd[1952]: time="2025-03-25T01:15:59.920232527Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 25 01:15:59.922270 containerd[1952]: time="2025-03-25T01:15:59.920266859Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 25 01:15:59.922270 containerd[1952]: time="2025-03-25T01:15:59.920299259Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 25 01:15:59.922270 containerd[1952]: time="2025-03-25T01:15:59.920331983Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 25 01:15:59.922270 containerd[1952]: time="2025-03-25T01:15:59.920360267Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 25 01:15:59.922270 containerd[1952]: time="2025-03-25T01:15:59.920385203Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 25 01:15:59.922270 containerd[1952]: time="2025-03-25T01:15:59.920418839Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 25 01:15:59.922270 containerd[1952]: time="2025-03-25T01:15:59.920648063Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 25 01:15:59.922270 containerd[1952]: time="2025-03-25T01:15:59.920696207Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 25 01:15:59.922861 containerd[1952]: time="2025-03-25T01:15:59.920737187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 25 01:15:59.922861 containerd[1952]: time="2025-03-25T01:15:59.920766683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 25 01:15:59.922861 containerd[1952]: time="2025-03-25T01:15:59.920793287Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 25 01:15:59.922861 containerd[1952]: time="2025-03-25T01:15:59.920820599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 25 01:15:59.922861 containerd[1952]: time="2025-03-25T01:15:59.920850287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 25 01:15:59.922861 containerd[1952]: time="2025-03-25T01:15:59.920876483Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 25 01:15:59.922861 containerd[1952]: time="2025-03-25T01:15:59.920904827Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 25 01:15:59.922861 containerd[1952]: time="2025-03-25T01:15:59.920934407Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 25 01:15:59.922861 containerd[1952]: time="2025-03-25T01:15:59.920962127Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 25 01:15:59.922861 containerd[1952]: time="2025-03-25T01:15:59.921144011Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 25 01:15:59.922861 containerd[1952]: time="2025-03-25T01:15:59.921173735Z" level=info msg="Start snapshots syncer" Mar 25 01:15:59.922861 containerd[1952]: time="2025-03-25T01:15:59.921248639Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 25 01:15:59.923358 containerd[1952]: time="2025-03-25T01:15:59.921660575Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 25 01:15:59.923358 containerd[1952]: time="2025-03-25T01:15:59.921743075Z" level=info msg="loading plugin" 
id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 25 01:15:59.923595 containerd[1952]: time="2025-03-25T01:15:59.922054655Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 25 01:15:59.923595 containerd[1952]: time="2025-03-25T01:15:59.923250431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 25 01:15:59.923595 containerd[1952]: time="2025-03-25T01:15:59.923311343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 25 01:15:59.923595 containerd[1952]: time="2025-03-25T01:15:59.923347367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 25 01:15:59.923595 containerd[1952]: time="2025-03-25T01:15:59.923375195Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 25 01:15:59.923595 containerd[1952]: time="2025-03-25T01:15:59.923406287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 25 01:15:59.923595 containerd[1952]: time="2025-03-25T01:15:59.923434283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 25 01:15:59.923595 containerd[1952]: time="2025-03-25T01:15:59.923462327Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 25 01:15:59.923595 containerd[1952]: time="2025-03-25T01:15:59.923518427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 25 01:15:59.923595 containerd[1952]: time="2025-03-25T01:15:59.923549579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 25 01:15:59.923595 containerd[1952]: time="2025-03-25T01:15:59.923574815Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 25 01:15:59.929461 containerd[1952]: time="2025-03-25T01:15:59.923681015Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 01:15:59.929461 containerd[1952]: time="2025-03-25T01:15:59.923716787Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 01:15:59.929461 containerd[1952]: time="2025-03-25T01:15:59.923740127Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 01:15:59.929461 containerd[1952]: time="2025-03-25T01:15:59.923765483Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 01:15:59.929461 containerd[1952]: time="2025-03-25T01:15:59.923800943Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 25 01:15:59.929461 containerd[1952]: time="2025-03-25T01:15:59.923832335Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 25 01:15:59.929461 containerd[1952]: time="2025-03-25T01:15:59.923859899Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 25 01:15:59.929461 containerd[1952]: time="2025-03-25T01:15:59.923915519Z" level=info msg="runtime interface 
created" Mar 25 01:15:59.929461 containerd[1952]: time="2025-03-25T01:15:59.923930387Z" level=info msg="created NRI interface" Mar 25 01:15:59.929461 containerd[1952]: time="2025-03-25T01:15:59.923951747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 25 01:15:59.929461 containerd[1952]: time="2025-03-25T01:15:59.923981831Z" level=info msg="Connect containerd service" Mar 25 01:15:59.929461 containerd[1952]: time="2025-03-25T01:15:59.924054215Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 25 01:15:59.929461 containerd[1952]: time="2025-03-25T01:15:59.926710115Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 01:15:59.997806 coreos-metadata[2016]: Mar 25 01:15:59.992 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 25 01:15:59.997806 coreos-metadata[2016]: Mar 25 01:15:59.993 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 25 01:15:59.997806 coreos-metadata[2016]: Mar 25 01:15:59.994 INFO Fetch successful Mar 25 01:15:59.997806 coreos-metadata[2016]: Mar 25 01:15:59.994 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 25 01:15:59.997806 coreos-metadata[2016]: Mar 25 01:15:59.995 INFO Fetch successful Mar 25 01:16:00.000438 unknown[2016]: wrote ssh authorized keys file for user: core Mar 25 01:16:00.076855 update-ssh-keys[2076]: Updated "/home/core/.ssh/authorized_keys" Mar 25 01:16:00.078983 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 25 01:16:00.093768 systemd[1]: Finished sshkeys.service. Mar 25 01:16:00.245383 systemd-networkd[1856]: eth0: Gained IPv6LL Mar 25 01:16:00.281651 locksmithd[1975]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 25 01:16:00.283568 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 25 01:16:00.317261 containerd[1952]: time="2025-03-25T01:16:00.315619485Z" level=info msg="Start subscribing containerd event" Mar 25 01:16:00.317261 containerd[1952]: time="2025-03-25T01:16:00.315720837Z" level=info msg="Start recovering state" Mar 25 01:16:00.317261 containerd[1952]: time="2025-03-25T01:16:00.315870189Z" level=info msg="Start event monitor" Mar 25 01:16:00.317261 containerd[1952]: time="2025-03-25T01:16:00.315897645Z" level=info msg="Start cni network conf syncer for default" Mar 25 01:16:00.317261 containerd[1952]: time="2025-03-25T01:16:00.315931149Z" level=info msg="Start streaming server" Mar 25 01:16:00.317261 containerd[1952]: time="2025-03-25T01:16:00.315952353Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 25 01:16:00.317261 containerd[1952]: time="2025-03-25T01:16:00.315969345Z" level=info msg="runtime interface starting up..." Mar 25 01:16:00.317261 containerd[1952]: time="2025-03-25T01:16:00.315984117Z" level=info msg="starting plugins..." Mar 25 01:16:00.317261 containerd[1952]: time="2025-03-25T01:16:00.316012521Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 25 01:16:00.330654 containerd[1952]: time="2025-03-25T01:16:00.329124969Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Mar 25 01:16:00.330654 containerd[1952]: time="2025-03-25T01:16:00.329295969Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 25 01:16:00.330654 containerd[1952]: time="2025-03-25T01:16:00.329393277Z" level=info msg="containerd successfully booted in 0.529665s" Mar 25 01:16:00.334601 systemd[1]: Started containerd.service - containerd container runtime. Mar 25 01:16:00.340667 systemd[1]: Reached target network-online.target - Network is Online. Mar 25 01:16:00.349801 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 25 01:16:00.357650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:16:00.366555 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 25 01:16:00.508069 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 25 01:16:00.552825 amazon-ssm-agent[2141]: Initializing new seelog logger Mar 25 01:16:00.552825 amazon-ssm-agent[2141]: New Seelog Logger Creation Complete Mar 25 01:16:00.552825 amazon-ssm-agent[2141]: 2025/03/25 01:16:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:16:00.552825 amazon-ssm-agent[2141]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:16:00.553414 amazon-ssm-agent[2141]: 2025/03/25 01:16:00 processing appconfig overrides Mar 25 01:16:00.554748 amazon-ssm-agent[2141]: 2025/03/25 01:16:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:16:00.554748 amazon-ssm-agent[2141]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:16:00.554866 amazon-ssm-agent[2141]: 2025/03/25 01:16:00 processing appconfig overrides Mar 25 01:16:00.556331 amazon-ssm-agent[2141]: 2025/03/25 01:16:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:16:00.556331 amazon-ssm-agent[2141]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:16:00.556331 amazon-ssm-agent[2141]: 2025/03/25 01:16:00 processing appconfig overrides Mar 25 01:16:00.556331 amazon-ssm-agent[2141]: 2025-03-25 01:16:00 INFO Proxy environment variables: Mar 25 01:16:00.569998 amazon-ssm-agent[2141]: 2025/03/25 01:16:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:16:00.569998 amazon-ssm-agent[2141]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 25 01:16:00.569998 amazon-ssm-agent[2141]: 2025/03/25 01:16:00 processing appconfig overrides Mar 25 01:16:00.656507 amazon-ssm-agent[2141]: 2025-03-25 01:16:00 INFO https_proxy: Mar 25 01:16:00.756918 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 25 01:16:00.763422 amazon-ssm-agent[2141]: 2025-03-25 01:16:00 INFO http_proxy: Mar 25 01:16:00.861539 amazon-ssm-agent[2141]: 2025-03-25 01:16:00 INFO no_proxy: Mar 25 01:16:00.961361 amazon-ssm-agent[2141]: 2025-03-25 01:16:00 INFO Checking if agent identity type OnPrem can be assumed Mar 25 01:16:01.059520 amazon-ssm-agent[2141]: 2025-03-25 01:16:00 INFO Checking if agent identity type EC2 can be assumed Mar 25 01:16:01.150446 sshd_keygen[1958]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 25 01:16:01.158262 amazon-ssm-agent[2141]: 2025-03-25 01:16:00 INFO Agent will take identity from EC2 Mar 25 01:16:01.206690 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 25 01:16:01.220784 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Mar 25 01:16:01.229893 systemd[1]: Started sshd@0-172.31.23.121:22-147.75.109.163:49232.service - OpenSSH per-connection server daemon (147.75.109.163:49232). Mar 25 01:16:01.246710 amazon-ssm-agent[2141]: 2025-03-25 01:16:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 25 01:16:01.246710 amazon-ssm-agent[2141]: 2025-03-25 01:16:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 25 01:16:01.246874 amazon-ssm-agent[2141]: 2025-03-25 01:16:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 25 01:16:01.246874 amazon-ssm-agent[2141]: 2025-03-25 01:16:00 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 25 01:16:01.246874 amazon-ssm-agent[2141]: 2025-03-25 01:16:00 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Mar 25 01:16:01.246874 amazon-ssm-agent[2141]: 2025-03-25 01:16:00 INFO [amazon-ssm-agent] Starting Core Agent Mar 25 01:16:01.247025 amazon-ssm-agent[2141]: 2025-03-25 01:16:00 INFO [amazon-ssm-agent] registrar detected. Attempting registration Mar 25 01:16:01.247025 amazon-ssm-agent[2141]: 2025-03-25 01:16:00 INFO [Registrar] Starting registrar module Mar 25 01:16:01.247132 amazon-ssm-agent[2141]: 2025-03-25 01:16:00 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 25 01:16:01.247132 amazon-ssm-agent[2141]: 2025-03-25 01:16:01 INFO [EC2Identity] EC2 registration was successful. Mar 25 01:16:01.247132 amazon-ssm-agent[2141]: 2025-03-25 01:16:01 INFO [CredentialRefresher] credentialRefresher has started Mar 25 01:16:01.247132 amazon-ssm-agent[2141]: 2025-03-25 01:16:01 INFO [CredentialRefresher] Starting credentials refresher loop Mar 25 01:16:01.247132 amazon-ssm-agent[2141]: 2025-03-25 01:16:01 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 25 01:16:01.257510 amazon-ssm-agent[2141]: 2025-03-25 01:16:01 INFO [CredentialRefresher] Next credential rotation will be in 31.2499862837 minutes Mar 25 01:16:01.264656 systemd[1]: issuegen.service: Deactivated successfully. Mar 25 01:16:01.265154 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 25 01:16:01.279520 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 25 01:16:01.287864 tar[1943]: linux-arm64/LICENSE Mar 25 01:16:01.287864 tar[1943]: linux-arm64/README.md Mar 25 01:16:01.333440 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 25 01:16:01.344277 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 25 01:16:01.353094 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 25 01:16:01.361088 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 25 01:16:01.365981 systemd[1]: Reached target getty.target - Login Prompts. Mar 25 01:16:01.490630 sshd[2170]: Accepted publickey for core from 147.75.109.163 port 49232 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:16:01.493587 sshd-session[2170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:16:01.508481 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 25 01:16:01.515426 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 25 01:16:01.542559 systemd-logind[1936]: New session 1 of user core. Mar 25 01:16:01.560471 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 25 01:16:01.570880 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 25 01:16:01.594985 (systemd)[2184]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 25 01:16:01.601070 systemd-logind[1936]: New session c1 of user core. Mar 25 01:16:01.892357 systemd[2184]: Queued start job for default target default.target. Mar 25 01:16:01.902410 systemd[2184]: Created slice app.slice - User Application Slice. Mar 25 01:16:01.902461 systemd[2184]: Reached target paths.target - Paths. Mar 25 01:16:01.902549 systemd[2184]: Reached target timers.target - Timers. Mar 25 01:16:01.905058 systemd[2184]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 25 01:16:01.945476 systemd[2184]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 25 01:16:01.945735 systemd[2184]: Reached target sockets.target - Sockets. Mar 25 01:16:01.945833 systemd[2184]: Reached target basic.target - Basic System. Mar 25 01:16:01.945914 systemd[2184]: Reached target default.target - Main User Target. Mar 25 01:16:01.945972 systemd[2184]: Startup finished in 329ms. Mar 25 01:16:01.947722 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 25 01:16:01.959497 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 25 01:16:02.115479 systemd[1]: Started sshd@1-172.31.23.121:22-147.75.109.163:49242.service - OpenSSH per-connection server daemon (147.75.109.163:49242). Mar 25 01:16:02.230977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:16:02.236015 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 25 01:16:02.240390 systemd[1]: Startup finished in 1.069s (kernel) + 8.090s (initrd) + 8.417s (userspace) = 17.578s. Mar 25 01:16:02.247654 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:16:02.253645 ntpd[1929]: Listen normally on 6 eth0 [fe80::4aa:28ff:fea9:b5cb%2]:123 Mar 25 01:16:02.254872 ntpd[1929]: 25 Mar 01:16:02 ntpd[1929]: Listen normally on 6 eth0 [fe80::4aa:28ff:fea9:b5cb%2]:123 Mar 25 01:16:02.310635 amazon-ssm-agent[2141]: 2025-03-25 01:16:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 25 01:16:02.338297 sshd[2195]: Accepted publickey for core from 147.75.109.163 port 49242 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:16:02.339993 sshd-session[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:16:02.356557 systemd-logind[1936]: New session 2 of user core. Mar 25 01:16:02.367529 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 25 01:16:02.413774 amazon-ssm-agent[2141]: 2025-03-25 01:16:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2209) started Mar 25 01:16:02.508877 sshd[2213]: Connection closed by 147.75.109.163 port 49242 Mar 25 01:16:02.510391 sshd-session[2195]: pam_unix(sshd:session): session closed for user core Mar 25 01:16:02.515470 amazon-ssm-agent[2141]: 2025-03-25 01:16:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 25 01:16:02.517371 systemd[1]: sshd@1-172.31.23.121:22-147.75.109.163:49242.service: Deactivated successfully. Mar 25 01:16:02.522569 systemd[1]: session-2.scope: Deactivated successfully. Mar 25 01:16:02.526064 systemd-logind[1936]: Session 2 logged out. Waiting for processes to exit. 
Mar 25 01:16:02.530043 systemd-logind[1936]: Removed session 2. Mar 25 01:16:02.552683 systemd[1]: Started sshd@2-172.31.23.121:22-147.75.109.163:49248.service - OpenSSH per-connection server daemon (147.75.109.163:49248). Mar 25 01:16:02.749080 sshd[2229]: Accepted publickey for core from 147.75.109.163 port 49248 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:16:02.751449 sshd-session[2229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:16:02.761296 systemd-logind[1936]: New session 3 of user core. Mar 25 01:16:02.770477 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 25 01:16:02.888761 sshd[2231]: Connection closed by 147.75.109.163 port 49248 Mar 25 01:16:02.889626 sshd-session[2229]: pam_unix(sshd:session): session closed for user core Mar 25 01:16:02.976253 systemd[1]: sshd@2-172.31.23.121:22-147.75.109.163:49248.service: Deactivated successfully. Mar 25 01:16:02.981860 systemd[1]: session-3.scope: Deactivated successfully. Mar 25 01:16:02.985386 systemd-logind[1936]: Session 3 logged out. Waiting for processes to exit. Mar 25 01:16:02.988868 systemd[1]: Started sshd@3-172.31.23.121:22-147.75.109.163:49256.service - OpenSSH per-connection server daemon (147.75.109.163:49256). Mar 25 01:16:02.992483 systemd-logind[1936]: Removed session 3. Mar 25 01:16:03.184249 sshd[2236]: Accepted publickey for core from 147.75.109.163 port 49256 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:16:03.186765 sshd-session[2236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:16:03.198968 systemd-logind[1936]: New session 4 of user core. Mar 25 01:16:03.207519 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 25 01:16:03.325773 kubelet[2201]: E0325 01:16:03.325681 2201 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:16:03.329338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:16:03.330029 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:16:03.331355 systemd[1]: kubelet.service: Consumed 1.278s CPU time, 232.4M memory peak. Mar 25 01:16:03.340592 sshd[2240]: Connection closed by 147.75.109.163 port 49256 Mar 25 01:16:03.342557 sshd-session[2236]: pam_unix(sshd:session): session closed for user core Mar 25 01:16:03.347425 systemd[1]: sshd@3-172.31.23.121:22-147.75.109.163:49256.service: Deactivated successfully. Mar 25 01:16:03.350692 systemd[1]: session-4.scope: Deactivated successfully. Mar 25 01:16:03.354907 systemd-logind[1936]: Session 4 logged out. Waiting for processes to exit. Mar 25 01:16:03.356785 systemd-logind[1936]: Removed session 4. Mar 25 01:16:03.375454 systemd[1]: Started sshd@4-172.31.23.121:22-147.75.109.163:49266.service - OpenSSH per-connection server daemon (147.75.109.163:49266). Mar 25 01:16:03.569640 sshd[2247]: Accepted publickey for core from 147.75.109.163 port 49266 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:16:03.572023 sshd-session[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:16:03.579868 systemd-logind[1936]: New session 5 of user core. 
Mar 25 01:16:03.592467 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 25 01:16:03.707989 sudo[2250]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 25 01:16:03.708818 sudo[2250]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:16:03.723814 sudo[2250]: pam_unix(sudo:session): session closed for user root Mar 25 01:16:03.748105 sshd[2249]: Connection closed by 147.75.109.163 port 49266 Mar 25 01:16:03.746853 sshd-session[2247]: pam_unix(sshd:session): session closed for user core Mar 25 01:16:03.752734 systemd[1]: sshd@4-172.31.23.121:22-147.75.109.163:49266.service: Deactivated successfully. Mar 25 01:16:03.755632 systemd[1]: session-5.scope: Deactivated successfully. Mar 25 01:16:03.758802 systemd-logind[1936]: Session 5 logged out. Waiting for processes to exit. Mar 25 01:16:03.760962 systemd-logind[1936]: Removed session 5. Mar 25 01:16:03.780797 systemd[1]: Started sshd@5-172.31.23.121:22-147.75.109.163:49272.service - OpenSSH per-connection server daemon (147.75.109.163:49272). Mar 25 01:16:03.978789 sshd[2256]: Accepted publickey for core from 147.75.109.163 port 49272 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:16:03.981300 sshd-session[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:16:03.991282 systemd-logind[1936]: New session 6 of user core. Mar 25 01:16:03.998485 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 25 01:16:04.102572 sudo[2261]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 25 01:16:04.103164 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:16:04.110413 sudo[2261]: pam_unix(sudo:session): session closed for user root Mar 25 01:16:04.120300 sudo[2260]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 25 01:16:04.120918 sudo[2260]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:16:04.136909 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 01:16:04.199529 augenrules[2283]: No rules Mar 25 01:16:04.201896 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 01:16:04.202677 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 01:16:04.204627 sudo[2260]: pam_unix(sudo:session): session closed for user root Mar 25 01:16:04.228309 sshd[2259]: Connection closed by 147.75.109.163 port 49272 Mar 25 01:16:04.229054 sshd-session[2256]: pam_unix(sshd:session): session closed for user core Mar 25 01:16:04.236429 systemd[1]: sshd@5-172.31.23.121:22-147.75.109.163:49272.service: Deactivated successfully. Mar 25 01:16:04.239601 systemd[1]: session-6.scope: Deactivated successfully. Mar 25 01:16:04.242123 systemd-logind[1936]: Session 6 logged out. Waiting for processes to exit. Mar 25 01:16:04.244103 systemd-logind[1936]: Removed session 6. Mar 25 01:16:04.264509 systemd[1]: Started sshd@6-172.31.23.121:22-147.75.109.163:49286.service - OpenSSH per-connection server daemon (147.75.109.163:49286). Mar 25 01:16:04.458596 sshd[2292]: Accepted publickey for core from 147.75.109.163 port 49286 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:16:04.461003 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:16:04.469709 systemd-logind[1936]: New session 7 of user core. 
Mar 25 01:16:04.480472 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 25 01:16:04.584591 sudo[2295]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 25 01:16:04.585240 sudo[2295]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:16:05.079313 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 25 01:16:05.095696 (dockerd)[2312]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 25 01:16:05.434379 dockerd[2312]: time="2025-03-25T01:16:05.434156067Z" level=info msg="Starting up" Mar 25 01:16:05.441239 dockerd[2312]: time="2025-03-25T01:16:05.441098127Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 25 01:16:05.495773 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3048657333-merged.mount: Deactivated successfully. Mar 25 01:16:05.651263 dockerd[2312]: time="2025-03-25T01:16:05.650992696Z" level=info msg="Loading containers: start." Mar 25 01:16:05.889420 kernel: Initializing XFRM netlink socket Mar 25 01:16:05.891764 (udev-worker)[2337]: Network interface NamePolicy= disabled on kernel command line. Mar 25 01:16:06.009677 systemd-networkd[1856]: docker0: Link UP Mar 25 01:16:06.077594 dockerd[2312]: time="2025-03-25T01:16:06.077525246Z" level=info msg="Loading containers: done." Mar 25 01:16:06.103269 dockerd[2312]: time="2025-03-25T01:16:06.102960254Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 25 01:16:06.103269 dockerd[2312]: time="2025-03-25T01:16:06.103075874Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 25 01:16:06.103540 dockerd[2312]: time="2025-03-25T01:16:06.103337774Z" level=info msg="Daemon has completed initialization" Mar 25 01:16:06.155259 dockerd[2312]: time="2025-03-25T01:16:06.155034758Z" level=info msg="API listen on /run/docker.sock" Mar 25 01:16:06.156105 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 25 01:16:06.484846 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3252759807-merged.mount: Deactivated successfully. Mar 25 01:16:07.268541 containerd[1952]: time="2025-03-25T01:16:07.268483653Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 25 01:16:07.899105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1178454068.mount: Deactivated successfully. 
Mar 25 01:16:09.964708 containerd[1952]: time="2025-03-25T01:16:09.964621519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:09.967103 containerd[1952]: time="2025-03-25T01:16:09.967021831Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=25552766" Mar 25 01:16:09.968471 containerd[1952]: time="2025-03-25T01:16:09.968412678Z" level=info msg="ImageCreate event name:\"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:09.975508 containerd[1952]: time="2025-03-25T01:16:09.975422369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:09.977345 containerd[1952]: time="2025-03-25T01:16:09.977284248Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"25549566\" in 2.708734261s" Mar 25 01:16:09.977456 containerd[1952]: time="2025-03-25T01:16:09.977346631Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\"" Mar 25 01:16:09.978814 containerd[1952]: time="2025-03-25T01:16:09.978529126Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 25 01:16:12.059005 containerd[1952]: time="2025-03-25T01:16:12.058831043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:12.060693 containerd[1952]: time="2025-03-25T01:16:12.060598411Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=22458978" Mar 25 01:16:12.061873 containerd[1952]: time="2025-03-25T01:16:12.061799911Z" level=info msg="ImageCreate event name:\"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:12.066337 containerd[1952]: time="2025-03-25T01:16:12.066281979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:12.068517 containerd[1952]: time="2025-03-25T01:16:12.068337478Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"23899774\" in 2.089752164s" Mar 25 01:16:12.068517 containerd[1952]: time="2025-03-25T01:16:12.068390293Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\"" Mar 25 01:16:12.069243 
containerd[1952]: time="2025-03-25T01:16:12.069176181Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 25 01:16:13.469684 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 25 01:16:13.472708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:16:13.791790 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:16:13.806704 (kubelet)[2578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:16:13.879668 kubelet[2578]: E0325 01:16:13.879576 2578 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:16:13.889786 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:16:13.890162 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:16:13.891786 systemd[1]: kubelet.service: Consumed 302ms CPU time, 94M memory peak. Mar 25 01:16:14.410820 containerd[1952]: time="2025-03-25T01:16:14.410767236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:14.413106 containerd[1952]: time="2025-03-25T01:16:14.413029131Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=17125829" Mar 25 01:16:14.414704 containerd[1952]: time="2025-03-25T01:16:14.414626241Z" level=info msg="ImageCreate event name:\"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:14.419420 containerd[1952]: time="2025-03-25T01:16:14.419336699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:14.421510 containerd[1952]: time="2025-03-25T01:16:14.421336514Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"18566643\" in 2.352081945s" Mar 25 01:16:14.421510 containerd[1952]: time="2025-03-25T01:16:14.421389041Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\"" Mar 25 01:16:14.422996 containerd[1952]: time="2025-03-25T01:16:14.422921967Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 25 01:16:15.718608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount400722661.mount: Deactivated successfully. 
Mar 25 01:16:16.259958 containerd[1952]: time="2025-03-25T01:16:16.259881892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:16.261253 containerd[1952]: time="2025-03-25T01:16:16.261156773Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=26871915" Mar 25 01:16:16.263398 containerd[1952]: time="2025-03-25T01:16:16.263300745Z" level=info msg="ImageCreate event name:\"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:16.266434 containerd[1952]: time="2025-03-25T01:16:16.266331972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:16.267761 containerd[1952]: time="2025-03-25T01:16:16.267577330Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"26870934\" in 1.844572006s" Mar 25 01:16:16.267761 containerd[1952]: time="2025-03-25T01:16:16.267628344Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\"" Mar 25 01:16:16.268751 containerd[1952]: time="2025-03-25T01:16:16.268446721Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 25 01:16:16.797174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1773257118.mount: Deactivated successfully. 
Mar 25 01:16:17.899283 containerd[1952]: time="2025-03-25T01:16:17.898820375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:17.901536 containerd[1952]: time="2025-03-25T01:16:17.901409529Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Mar 25 01:16:17.903573 containerd[1952]: time="2025-03-25T01:16:17.903414303Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:17.911445 containerd[1952]: time="2025-03-25T01:16:17.911302053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:17.913312 containerd[1952]: time="2025-03-25T01:16:17.912663822Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.644164718s" Mar 25 01:16:17.913312 containerd[1952]: time="2025-03-25T01:16:17.912725329Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 25 01:16:17.913789 containerd[1952]: time="2025-03-25T01:16:17.913749128Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 25 01:16:18.409705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4124087041.mount: Deactivated successfully. 
Mar 25 01:16:18.421604 containerd[1952]: time="2025-03-25T01:16:18.421520126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:16:18.423313 containerd[1952]: time="2025-03-25T01:16:18.423224822Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 25 01:16:18.425572 containerd[1952]: time="2025-03-25T01:16:18.425480918Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:16:18.429896 containerd[1952]: time="2025-03-25T01:16:18.429794710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:16:18.431336 containerd[1952]: time="2025-03-25T01:16:18.431120784Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 517.204425ms" Mar 25 01:16:18.431336 containerd[1952]: time="2025-03-25T01:16:18.431177177Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 25 01:16:18.432267 containerd[1952]: time="2025-03-25T01:16:18.432147670Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 25 01:16:19.053782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2028314924.mount: Deactivated successfully. 
Mar 25 01:16:23.350346 containerd[1952]: time="2025-03-25T01:16:23.350268495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:23.352296 containerd[1952]: time="2025-03-25T01:16:23.352220983Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Mar 25 01:16:23.353699 containerd[1952]: time="2025-03-25T01:16:23.353607088Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:23.360724 containerd[1952]: time="2025-03-25T01:16:23.360648283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:23.363065 containerd[1952]: time="2025-03-25T01:16:23.362983558Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.930784395s" Mar 25 01:16:23.363065 containerd[1952]: time="2025-03-25T01:16:23.363051151Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Mar 25 01:16:23.969698 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 25 01:16:23.976476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:16:24.323450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:16:24.341671 (kubelet)[2726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:16:24.424231 kubelet[2726]: E0325 01:16:24.422237 2726 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:16:24.426729 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:16:24.427052 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:16:24.427953 systemd[1]: kubelet.service: Consumed 294ms CPU time, 96.6M memory peak. Mar 25 01:16:29.939535 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 25 01:16:30.179862 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:16:30.180247 systemd[1]: kubelet.service: Consumed 294ms CPU time, 96.6M memory peak. Mar 25 01:16:30.184049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:16:30.250519 systemd[1]: Reload requested from client PID 2743 ('systemctl') (unit session-7.scope)... Mar 25 01:16:30.250744 systemd[1]: Reloading... Mar 25 01:16:30.534284 zram_generator::config[2791]: No configuration found. 
Mar 25 01:16:30.753727 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:16:30.976509 systemd[1]: Reloading finished in 724 ms. Mar 25 01:16:31.069990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:16:31.077750 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:16:31.082848 systemd[1]: kubelet.service: Deactivated successfully. Mar 25 01:16:31.083392 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:16:31.083481 systemd[1]: kubelet.service: Consumed 234ms CPU time, 82.3M memory peak. Mar 25 01:16:31.086970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:16:31.395963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:16:31.412764 (kubelet)[2853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 01:16:31.482402 kubelet[2853]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:16:31.482402 kubelet[2853]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 25 01:16:31.482402 kubelet[2853]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:16:31.482942 kubelet[2853]: I0325 01:16:31.482532 2853 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 01:16:32.745402 kubelet[2853]: I0325 01:16:32.745334 2853 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 25 01:16:32.745402 kubelet[2853]: I0325 01:16:32.745385 2853 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 01:16:32.746010 kubelet[2853]: I0325 01:16:32.745824 2853 server.go:929] "Client rotation is on, will bootstrap in background" Mar 25 01:16:32.789449 kubelet[2853]: E0325 01:16:32.789387 2853 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.23.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.121:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:16:32.791223 kubelet[2853]: I0325 01:16:32.790980 2853 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:16:32.811454 kubelet[2853]: I0325 01:16:32.811406 2853 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 25 01:16:32.818082 kubelet[2853]: I0325 01:16:32.818029 2853 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 25 01:16:32.819392 kubelet[2853]: I0325 01:16:32.819345 2853 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 25 01:16:32.819728 kubelet[2853]: I0325 01:16:32.819663 2853 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 01:16:32.820020 kubelet[2853]: I0325 01:16:32.819720 2853 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-121","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 25 01:16:32.820230 kubelet[2853]: I0325 01:16:32.820048 2853 topology_manager.go:138] "Creating topology manager with none policy" Mar 25 01:16:32.820230 kubelet[2853]: I0325 01:16:32.820070 2853 container_manager_linux.go:300] "Creating device plugin manager" Mar 25 01:16:32.820352 kubelet[2853]: I0325 01:16:32.820284 2853 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:16:32.825015 kubelet[2853]: I0325 01:16:32.824450 2853 kubelet.go:408] "Attempting to sync node with API server" Mar 25 01:16:32.825015 kubelet[2853]: I0325 01:16:32.824505 2853 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 25 01:16:32.825015 kubelet[2853]: I0325 01:16:32.824552 2853 kubelet.go:314] "Adding apiserver pod source" Mar 25 01:16:32.825015 kubelet[2853]: I0325 01:16:32.824581 2853 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 25 01:16:32.832290 kubelet[2853]: W0325 01:16:32.832171 2853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-121&limit=500&resourceVersion=0": dial tcp 172.31.23.121:6443: connect: connection refused Mar 25 01:16:32.832443 kubelet[2853]: E0325 01:16:32.832310 2853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.23.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-121&limit=500&resourceVersion=0\": dial tcp 172.31.23.121:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:16:32.834734 kubelet[2853]: W0325 01:16:32.834255 2853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.121:6443: connect: connection refused Mar 25 01:16:32.834734 kubelet[2853]: E0325 01:16:32.834356 2853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.121:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:16:32.834734 kubelet[2853]: I0325 01:16:32.834503 2853 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 25 01:16:32.838348 kubelet[2853]: I0325 01:16:32.838311 2853 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 25 01:16:32.840754 kubelet[2853]: W0325 01:16:32.839618 2853 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 25 01:16:32.841198 kubelet[2853]: I0325 01:16:32.841145 2853 server.go:1269] "Started kubelet" Mar 25 01:16:32.844829 kubelet[2853]: I0325 01:16:32.844232 2853 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 25 01:16:32.846239 kubelet[2853]: I0325 01:16:32.846188 2853 server.go:460] "Adding debug handlers to kubelet server" Mar 25 01:16:32.847261 kubelet[2853]: I0325 01:16:32.846532 2853 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 25 01:16:32.847261 kubelet[2853]: I0325 01:16:32.847052 2853 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 25 01:16:32.850111 kubelet[2853]: E0325 01:16:32.847269 2853 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.121:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.121:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-121.182fe6d7379daed6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-121,UID:ip-172-31-23-121,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-121,},FirstTimestamp:2025-03-25 01:16:32.841109206 +0000 UTC m=+1.422551295,LastTimestamp:2025-03-25 01:16:32.841109206 +0000 UTC m=+1.422551295,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-121,}" Mar 25 01:16:32.851900 kubelet[2853]: I0325 01:16:32.851868 2853 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 25 01:16:32.854909 kubelet[2853]: I0325 01:16:32.853904 2853 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 25 01:16:32.859186 kubelet[2853]: E0325 01:16:32.858804 2853 kubelet_node_status.go:453] "Error getting the 
current node from lister" err="node \"ip-172-31-23-121\" not found" Mar 25 01:16:32.859186 kubelet[2853]: I0325 01:16:32.858907 2853 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 25 01:16:32.859444 kubelet[2853]: I0325 01:16:32.859263 2853 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 25 01:16:32.859444 kubelet[2853]: I0325 01:16:32.859357 2853 reconciler.go:26] "Reconciler: start to sync state" Mar 25 01:16:32.859962 kubelet[2853]: W0325 01:16:32.859889 2853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.121:6443: connect: connection refused Mar 25 01:16:32.860076 kubelet[2853]: E0325 01:16:32.859974 2853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.121:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:16:32.860625 kubelet[2853]: E0325 01:16:32.860183 2853 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 25 01:16:32.860776 kubelet[2853]: E0325 01:16:32.860634 2853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-121?timeout=10s\": dial tcp 172.31.23.121:6443: connect: connection refused" interval="200ms" Mar 25 01:16:32.861002 kubelet[2853]: I0325 01:16:32.860962 2853 factory.go:221] Registration of the systemd container factory successfully Mar 25 01:16:32.861145 kubelet[2853]: I0325 01:16:32.861106 2853 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 01:16:32.864012 kubelet[2853]: I0325 01:16:32.863963 2853 factory.go:221] Registration of the containerd container factory successfully Mar 25 01:16:32.906047 kubelet[2853]: I0325 01:16:32.905663 2853 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 25 01:16:32.906047 kubelet[2853]: I0325 01:16:32.905696 2853 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 25 01:16:32.906047 kubelet[2853]: I0325 01:16:32.905728 2853 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:16:32.906566 kubelet[2853]: I0325 01:16:32.906492 2853 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 25 01:16:32.909061 kubelet[2853]: I0325 01:16:32.909014 2853 policy_none.go:49] "None policy: Start" Mar 25 01:16:32.909633 kubelet[2853]: I0325 01:16:32.909578 2853 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 25 01:16:32.910301 kubelet[2853]: I0325 01:16:32.909817 2853 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 25 01:16:32.910301 kubelet[2853]: I0325 01:16:32.909856 2853 kubelet.go:2321] "Starting kubelet main sync loop" Mar 25 01:16:32.910301 kubelet[2853]: E0325 01:16:32.909924 2853 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 01:16:32.915712 kubelet[2853]: W0325 01:16:32.915626 2853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.121:6443: connect: connection refused Mar 25 01:16:32.915860 kubelet[2853]: E0325 01:16:32.915721 2853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.121:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:16:32.918253 kubelet[2853]: I0325 01:16:32.918176 2853 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 25 01:16:32.918375 kubelet[2853]: I0325 01:16:32.918266 2853 state_mem.go:35] "Initializing new in-memory state store" Mar 25 01:16:32.929891 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 25 01:16:32.948125 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 25 01:16:32.955572 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 25 01:16:32.959418 kubelet[2853]: E0325 01:16:32.959376 2853 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-121\" not found" Mar 25 01:16:32.964824 kubelet[2853]: I0325 01:16:32.964780 2853 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 01:16:32.965749 kubelet[2853]: I0325 01:16:32.965093 2853 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 25 01:16:32.965749 kubelet[2853]: I0325 01:16:32.965113 2853 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 01:16:32.965749 kubelet[2853]: I0325 01:16:32.965541 2853 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 01:16:32.969022 kubelet[2853]: E0325 01:16:32.968845 2853 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-121\" not found" Mar 25 01:16:33.027929 systemd[1]: Created slice kubepods-burstable-podd4078446864e9d3909e7cefefa23a2fc.slice - libcontainer container kubepods-burstable-podd4078446864e9d3909e7cefefa23a2fc.slice. Mar 25 01:16:33.045832 systemd[1]: Created slice kubepods-burstable-pod79ab5c5898d3f263297c1e096fac44cc.slice - libcontainer container kubepods-burstable-pod79ab5c5898d3f263297c1e096fac44cc.slice. 
Mar 25 01:16:33.061454 kubelet[2853]: I0325 01:16:33.059996 2853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4078446864e9d3909e7cefefa23a2fc-ca-certs\") pod \"kube-apiserver-ip-172-31-23-121\" (UID: \"d4078446864e9d3909e7cefefa23a2fc\") " pod="kube-system/kube-apiserver-ip-172-31-23-121" Mar 25 01:16:33.061454 kubelet[2853]: I0325 01:16:33.060078 2853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4078446864e9d3909e7cefefa23a2fc-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-121\" (UID: \"d4078446864e9d3909e7cefefa23a2fc\") " pod="kube-system/kube-apiserver-ip-172-31-23-121" Mar 25 01:16:33.061454 kubelet[2853]: I0325 01:16:33.060154 2853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4078446864e9d3909e7cefefa23a2fc-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-121\" (UID: \"d4078446864e9d3909e7cefefa23a2fc\") " pod="kube-system/kube-apiserver-ip-172-31-23-121" Mar 25 01:16:33.061454 kubelet[2853]: I0325 01:16:33.060242 2853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79ab5c5898d3f263297c1e096fac44cc-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-121\" (UID: \"79ab5c5898d3f263297c1e096fac44cc\") " pod="kube-system/kube-controller-manager-ip-172-31-23-121" Mar 25 01:16:33.061454 kubelet[2853]: I0325 01:16:33.060307 2853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/79ab5c5898d3f263297c1e096fac44cc-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-121\" (UID: \"79ab5c5898d3f263297c1e096fac44cc\") " pod="kube-system/kube-controller-manager-ip-172-31-23-121" Mar 25 01:16:33.061799 kubelet[2853]: I0325 01:16:33.060345 2853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79ab5c5898d3f263297c1e096fac44cc-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-121\" (UID: \"79ab5c5898d3f263297c1e096fac44cc\") " pod="kube-system/kube-controller-manager-ip-172-31-23-121" Mar 25 01:16:33.061799 kubelet[2853]: I0325 01:16:33.060410 2853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f1156b1eb01804383b87170a6db946d7-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-121\" (UID: \"f1156b1eb01804383b87170a6db946d7\") " pod="kube-system/kube-scheduler-ip-172-31-23-121" Mar 25 01:16:33.061799 kubelet[2853]: I0325 01:16:33.060469 2853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79ab5c5898d3f263297c1e096fac44cc-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-121\" (UID: \"79ab5c5898d3f263297c1e096fac44cc\") " pod="kube-system/kube-controller-manager-ip-172-31-23-121" Mar 25 01:16:33.061799 kubelet[2853]: I0325 01:16:33.060510 2853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/79ab5c5898d3f263297c1e096fac44cc-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-121\" (UID: \"79ab5c5898d3f263297c1e096fac44cc\") " pod="kube-system/kube-controller-manager-ip-172-31-23-121" Mar 25 01:16:33.061799 kubelet[2853]: E0325 01:16:33.061252 2853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-121?timeout=10s\": dial tcp 172.31.23.121:6443: connect: connection refused" interval="400ms" Mar 25 01:16:33.062235 systemd[1]: Created slice kubepods-burstable-podf1156b1eb01804383b87170a6db946d7.slice - libcontainer container kubepods-burstable-podf1156b1eb01804383b87170a6db946d7.slice. Mar 25 01:16:33.068124 kubelet[2853]: I0325 01:16:33.067652 2853 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-121" Mar 25 01:16:33.068325 kubelet[2853]: E0325 01:16:33.068245 2853 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.121:6443/api/v1/nodes\": dial tcp 172.31.23.121:6443: connect: connection refused" node="ip-172-31-23-121" Mar 25 01:16:33.271853 kubelet[2853]: I0325 01:16:33.271806 2853 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-121" Mar 25 01:16:33.272506 kubelet[2853]: E0325 01:16:33.272455 2853 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.121:6443/api/v1/nodes\": dial tcp 172.31.23.121:6443: connect: connection refused" node="ip-172-31-23-121" Mar 25 01:16:33.341905 containerd[1952]: time="2025-03-25T01:16:33.341744450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-121,Uid:d4078446864e9d3909e7cefefa23a2fc,Namespace:kube-system,Attempt:0,}" Mar 25 01:16:33.356923 containerd[1952]: time="2025-03-25T01:16:33.356863750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-121,Uid:79ab5c5898d3f263297c1e096fac44cc,Namespace:kube-system,Attempt:0,}" Mar 25 01:16:33.368255 containerd[1952]: time="2025-03-25T01:16:33.367440844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-121,Uid:f1156b1eb01804383b87170a6db946d7,Namespace:kube-system,Attempt:0,}" Mar 25 01:16:33.447358 containerd[1952]: time="2025-03-25T01:16:33.446734784Z" level=info msg="connecting to shim cb17caaa3074f43bb8e25b0a8140e63e99b218ee2dbaa1446f60d0221a6bacb0" address="unix:///run/containerd/s/83876955420b61c894fefbe01e35e5ad5f6328a3ad362fc3b391725e37a875e9" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:16:33.463562 kubelet[2853]: E0325 01:16:33.463511 2853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-121?timeout=10s\": dial tcp 172.31.23.121:6443: connect: connection refused" interval="800ms" Mar 25 01:16:33.483424 containerd[1952]: time="2025-03-25T01:16:33.483350311Z" level=info msg="connecting to shim 39aff35496c84746f4f138361298a2725d76b2df60f08965d36993c248838796" address="unix:///run/containerd/s/ac38895cf891cdb2a284ce4f625088732a3ab1be90883ca80939802882583d95" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:16:33.499485 containerd[1952]: time="2025-03-25T01:16:33.499416404Z" level=info msg="connecting to shim f8056cbfc2d40a7372347a589589128ac016493fd7127c493ba124690ab8d4c3" 
address="unix:///run/containerd/s/df68cdaa707a7bdf8b24b71d23bf13c1b336ff04d73c5f5644c82e04b20ba740" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:16:33.510494 systemd[1]: Started cri-containerd-cb17caaa3074f43bb8e25b0a8140e63e99b218ee2dbaa1446f60d0221a6bacb0.scope - libcontainer container cb17caaa3074f43bb8e25b0a8140e63e99b218ee2dbaa1446f60d0221a6bacb0. Mar 25 01:16:33.563824 systemd[1]: Started cri-containerd-39aff35496c84746f4f138361298a2725d76b2df60f08965d36993c248838796.scope - libcontainer container 39aff35496c84746f4f138361298a2725d76b2df60f08965d36993c248838796. Mar 25 01:16:33.590598 systemd[1]: Started cri-containerd-f8056cbfc2d40a7372347a589589128ac016493fd7127c493ba124690ab8d4c3.scope - libcontainer container f8056cbfc2d40a7372347a589589128ac016493fd7127c493ba124690ab8d4c3. Mar 25 01:16:33.652041 containerd[1952]: time="2025-03-25T01:16:33.651762016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-121,Uid:d4078446864e9d3909e7cefefa23a2fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb17caaa3074f43bb8e25b0a8140e63e99b218ee2dbaa1446f60d0221a6bacb0\"" Mar 25 01:16:33.661712 containerd[1952]: time="2025-03-25T01:16:33.661632809Z" level=info msg="CreateContainer within sandbox \"cb17caaa3074f43bb8e25b0a8140e63e99b218ee2dbaa1446f60d0221a6bacb0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 25 01:16:33.676920 kubelet[2853]: I0325 01:16:33.676881 2853 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-121" Mar 25 01:16:33.678326 kubelet[2853]: E0325 01:16:33.677807 2853 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.121:6443/api/v1/nodes\": dial tcp 172.31.23.121:6443: connect: connection refused" node="ip-172-31-23-121" Mar 25 01:16:33.692451 containerd[1952]: time="2025-03-25T01:16:33.692382732Z" level=info msg="Container 040d3b627cff13eb27fed1f324eab05387aff1c1fec4684c5585ce383aa69469: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:16:33.716199 containerd[1952]: time="2025-03-25T01:16:33.715499253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-121,Uid:79ab5c5898d3f263297c1e096fac44cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"39aff35496c84746f4f138361298a2725d76b2df60f08965d36993c248838796\"" Mar 25 01:16:33.722335 containerd[1952]: time="2025-03-25T01:16:33.722270841Z" level=info msg="CreateContainer within sandbox \"39aff35496c84746f4f138361298a2725d76b2df60f08965d36993c248838796\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 25 01:16:33.723452 containerd[1952]: time="2025-03-25T01:16:33.723392718Z" level=info msg="CreateContainer within sandbox \"cb17caaa3074f43bb8e25b0a8140e63e99b218ee2dbaa1446f60d0221a6bacb0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"040d3b627cff13eb27fed1f324eab05387aff1c1fec4684c5585ce383aa69469\"" Mar 25 01:16:33.724285 containerd[1952]: time="2025-03-25T01:16:33.724227951Z" level=info msg="StartContainer for \"040d3b627cff13eb27fed1f324eab05387aff1c1fec4684c5585ce383aa69469\"" Mar 25 01:16:33.726757 containerd[1952]: time="2025-03-25T01:16:33.726678508Z" level=info msg="connecting to shim 040d3b627cff13eb27fed1f324eab05387aff1c1fec4684c5585ce383aa69469" address="unix:///run/containerd/s/83876955420b61c894fefbe01e35e5ad5f6328a3ad362fc3b391725e37a875e9" protocol=ttrpc version=3 Mar 25 01:16:33.747976 containerd[1952]: time="2025-03-25T01:16:33.747896988Z" level=info 
msg="Container 013b4d22050f439a0fc4a6f919916ee5e1e056998328843fa6017be178f6b7bb: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:16:33.749121 containerd[1952]: time="2025-03-25T01:16:33.748663523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-121,Uid:f1156b1eb01804383b87170a6db946d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8056cbfc2d40a7372347a589589128ac016493fd7127c493ba124690ab8d4c3\"" Mar 25 01:16:33.757724 containerd[1952]: time="2025-03-25T01:16:33.757382844Z" level=info msg="CreateContainer within sandbox \"f8056cbfc2d40a7372347a589589128ac016493fd7127c493ba124690ab8d4c3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 25 01:16:33.771025 containerd[1952]: time="2025-03-25T01:16:33.770969855Z" level=info msg="CreateContainer within sandbox \"39aff35496c84746f4f138361298a2725d76b2df60f08965d36993c248838796\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"013b4d22050f439a0fc4a6f919916ee5e1e056998328843fa6017be178f6b7bb\"" Mar 25 01:16:33.771518 systemd[1]: Started cri-containerd-040d3b627cff13eb27fed1f324eab05387aff1c1fec4684c5585ce383aa69469.scope - libcontainer container 040d3b627cff13eb27fed1f324eab05387aff1c1fec4684c5585ce383aa69469. Mar 25 01:16:33.775699 containerd[1952]: time="2025-03-25T01:16:33.775550168Z" level=info msg="StartContainer for \"013b4d22050f439a0fc4a6f919916ee5e1e056998328843fa6017be178f6b7bb\"" Mar 25 01:16:33.783191 containerd[1952]: time="2025-03-25T01:16:33.782769111Z" level=info msg="connecting to shim 013b4d22050f439a0fc4a6f919916ee5e1e056998328843fa6017be178f6b7bb" address="unix:///run/containerd/s/ac38895cf891cdb2a284ce4f625088732a3ab1be90883ca80939802882583d95" protocol=ttrpc version=3 Mar 25 01:16:33.787449 containerd[1952]: time="2025-03-25T01:16:33.787397665Z" level=info msg="Container cd46281251f21d6444112b03a8ee263c15c339c011bdbd85e9ac685e3c552ecb: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:16:33.806256 kubelet[2853]: W0325 01:16:33.805523 2853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-121&limit=500&resourceVersion=0": dial tcp 172.31.23.121:6443: connect: connection refused Mar 25 01:16:33.807996 kubelet[2853]: E0325 01:16:33.807864 2853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-121&limit=500&resourceVersion=0\": dial tcp 172.31.23.121:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:16:33.814721 containerd[1952]: time="2025-03-25T01:16:33.814487607Z" level=info msg="CreateContainer within sandbox \"f8056cbfc2d40a7372347a589589128ac016493fd7127c493ba124690ab8d4c3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cd46281251f21d6444112b03a8ee263c15c339c011bdbd85e9ac685e3c552ecb\"" Mar 25 01:16:33.815799 containerd[1952]: time="2025-03-25T01:16:33.815481344Z" level=info msg="StartContainer for \"cd46281251f21d6444112b03a8ee263c15c339c011bdbd85e9ac685e3c552ecb\"" Mar 25 01:16:33.819864 containerd[1952]: time="2025-03-25T01:16:33.819790549Z" level=info msg="connecting to shim cd46281251f21d6444112b03a8ee263c15c339c011bdbd85e9ac685e3c552ecb" address="unix:///run/containerd/s/df68cdaa707a7bdf8b24b71d23bf13c1b336ff04d73c5f5644c82e04b20ba740" protocol=ttrpc 
version=3 Mar 25 01:16:33.832554 systemd[1]: Started cri-containerd-013b4d22050f439a0fc4a6f919916ee5e1e056998328843fa6017be178f6b7bb.scope - libcontainer container 013b4d22050f439a0fc4a6f919916ee5e1e056998328843fa6017be178f6b7bb. Mar 25 01:16:33.872545 systemd[1]: Started cri-containerd-cd46281251f21d6444112b03a8ee263c15c339c011bdbd85e9ac685e3c552ecb.scope - libcontainer container cd46281251f21d6444112b03a8ee263c15c339c011bdbd85e9ac685e3c552ecb. Mar 25 01:16:33.922578 containerd[1952]: time="2025-03-25T01:16:33.922405015Z" level=info msg="StartContainer for \"040d3b627cff13eb27fed1f324eab05387aff1c1fec4684c5585ce383aa69469\" returns successfully" Mar 25 01:16:33.943232 kubelet[2853]: W0325 01:16:33.941713 2853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.121:6443: connect: connection refused Mar 25 01:16:33.943232 kubelet[2853]: E0325 01:16:33.941795 2853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.121:6443: connect: connection refused" logger="UnhandledError" Mar 25 01:16:34.047758 containerd[1952]: time="2025-03-25T01:16:34.047695250Z" level=info msg="StartContainer for \"013b4d22050f439a0fc4a6f919916ee5e1e056998328843fa6017be178f6b7bb\" returns successfully" Mar 25 01:16:34.067977 containerd[1952]: time="2025-03-25T01:16:34.066770971Z" level=info msg="StartContainer for \"cd46281251f21d6444112b03a8ee263c15c339c011bdbd85e9ac685e3c552ecb\" returns successfully" Mar 25 01:16:34.481494 kubelet[2853]: I0325 01:16:34.481446 2853 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-121" Mar 25 01:16:38.284704 kubelet[2853]: E0325 01:16:38.284631 2853 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-121\" not found" node="ip-172-31-23-121" Mar 25 01:16:38.457719 kubelet[2853]: I0325 01:16:38.457453 2853 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-23-121" Mar 25 01:16:38.457719 kubelet[2853]: E0325 01:16:38.457510 2853 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-23-121\": node \"ip-172-31-23-121\" not found" Mar 25 01:16:38.837636 kubelet[2853]: I0325 01:16:38.837307 2853 apiserver.go:52] "Watching apiserver" Mar 25 01:16:38.862249 kubelet[2853]: I0325 01:16:38.859764 2853 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 25 01:16:40.392197 systemd[1]: Reload requested from client PID 3126 ('systemctl') (unit session-7.scope)... Mar 25 01:16:40.392246 systemd[1]: Reloading... Mar 25 01:16:40.596255 zram_generator::config[3181]: No configuration found. Mar 25 01:16:40.811505 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:16:41.064541 systemd[1]: Reloading finished in 671 ms. Mar 25 01:16:41.119596 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:16:41.137542 systemd[1]: kubelet.service: Deactivated successfully. 
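The RunPodSandbox → CreateContainer → StartContainer sequence logged above for the three control-plane static pods is the kubelet driving containerd over CRI. A trimmed sketch of that call chain against the CRI v1 API; the socket path and the kube-apiserver image tag are illustrative assumptions, only the pod metadata is taken from the log:

```go
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()

	// Assumed default containerd endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// 1. RunPodSandbox: creates the sandbox (the "cb17caaa..." id above) and its shim.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-apiserver-ip-172-31-23-121",
			Uid:       "d4078446864e9d3909e7cefefa23a2fc",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	// 2. CreateContainer inside that sandbox (image tag is illustrative).
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-apiserver"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-apiserver:v1.31.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}

	// 3. StartContainer, matching the "StartContainer ... returns successfully" lines.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		panic(err)
	}
}
```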
Mar 25 01:16:41.138104 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:16:41.138195 systemd[1]: kubelet.service: Consumed 2.088s CPU time, 118.2M memory peak. Mar 25 01:16:41.141710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:16:41.470866 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:16:41.487795 (kubelet)[3231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 01:16:41.595021 kubelet[3231]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:16:41.595021 kubelet[3231]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 25 01:16:41.595021 kubelet[3231]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:16:41.596872 kubelet[3231]: I0325 01:16:41.595140 3231 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 01:16:41.608916 kubelet[3231]: I0325 01:16:41.608845 3231 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 25 01:16:41.608916 kubelet[3231]: I0325 01:16:41.608899 3231 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 01:16:41.609525 kubelet[3231]: I0325 01:16:41.609423 3231 server.go:929] "Client rotation is on, will bootstrap in background" Mar 25 01:16:41.612663 kubelet[3231]: I0325 01:16:41.612584 3231 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 25 01:16:41.617153 kubelet[3231]: I0325 01:16:41.617068 3231 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:16:41.624959 sudo[3244]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 25 01:16:41.626664 kubelet[3231]: I0325 01:16:41.625778 3231 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 25 01:16:41.626925 sudo[3244]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 25 01:16:41.633266 kubelet[3231]: I0325 01:16:41.632315 3231 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 25 01:16:41.633266 kubelet[3231]: I0325 01:16:41.632577 3231 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 25 01:16:41.633266 kubelet[3231]: I0325 01:16:41.632793 3231 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 01:16:41.633266 kubelet[3231]: I0325 01:16:41.632838 3231 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-121","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 25 01:16:41.633650 kubelet[3231]: I0325 01:16:41.633140 3231 topology_manager.go:138] "Creating topology manager with none policy" Mar 25 01:16:41.633650 kubelet[3231]: I0325 01:16:41.633159 3231 container_manager_linux.go:300] "Creating device plugin manager" Mar 25 01:16:41.638264 kubelet[3231]: I0325 01:16:41.637450 3231 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:16:41.638264 kubelet[3231]: I0325 01:16:41.637687 3231 kubelet.go:408] "Attempting to sync node with API server" Mar 25 01:16:41.638264 kubelet[3231]: I0325 01:16:41.637721 3231 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 25 01:16:41.638264 kubelet[3231]: I0325 01:16:41.637816 3231 kubelet.go:314] "Adding apiserver pod source" Mar 25 01:16:41.638264 kubelet[3231]: I0325 01:16:41.637838 3231 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 25 01:16:41.651934 kubelet[3231]: I0325 01:16:41.650628 3231 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 25 01:16:41.654628 kubelet[3231]: I0325 01:16:41.654410 3231 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 25 01:16:41.660734 kubelet[3231]: I0325 01:16:41.659562 3231 server.go:1269] "Started kubelet" Mar 25 01:16:41.663907 kubelet[3231]: I0325 01:16:41.663870 3231 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 25 01:16:41.683058 kubelet[3231]: I0325 
01:16:41.683006 3231 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 25 01:16:41.688420 kubelet[3231]: I0325 01:16:41.688338 3231 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 25 01:16:41.690654 kubelet[3231]: I0325 01:16:41.690535 3231 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 25 01:16:41.694161 kubelet[3231]: I0325 01:16:41.694109 3231 server.go:460] "Adding debug handlers to kubelet server" Mar 25 01:16:41.694802 kubelet[3231]: I0325 01:16:41.694759 3231 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 25 01:16:41.700086 kubelet[3231]: E0325 01:16:41.699117 3231 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-121\" not found" Mar 25 01:16:41.701327 kubelet[3231]: I0325 01:16:41.700353 3231 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 25 01:16:41.701327 kubelet[3231]: I0325 01:16:41.700689 3231 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 25 01:16:41.701327 kubelet[3231]: I0325 01:16:41.700996 3231 reconciler.go:26] "Reconciler: start to sync state" Mar 25 01:16:41.713196 kubelet[3231]: I0325 01:16:41.713153 3231 factory.go:221] Registration of the systemd container factory successfully Mar 25 01:16:41.713568 kubelet[3231]: I0325 01:16:41.713538 3231 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 01:16:41.715524 kubelet[3231]: E0325 01:16:41.715257 3231 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 25 01:16:41.720231 kubelet[3231]: I0325 01:16:41.718527 3231 factory.go:221] Registration of the containerd container factory successfully Mar 25 01:16:41.740473 kubelet[3231]: I0325 01:16:41.740052 3231 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 25 01:16:41.768367 kubelet[3231]: I0325 01:16:41.762501 3231 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 25 01:16:41.768367 kubelet[3231]: I0325 01:16:41.762550 3231 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 25 01:16:41.768367 kubelet[3231]: I0325 01:16:41.762579 3231 kubelet.go:2321] "Starting kubelet main sync loop" Mar 25 01:16:41.768367 kubelet[3231]: E0325 01:16:41.762650 3231 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 01:16:41.863544 kubelet[3231]: E0325 01:16:41.863486 3231 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 25 01:16:41.870966 kubelet[3231]: I0325 01:16:41.870836 3231 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 25 01:16:41.872091 kubelet[3231]: I0325 01:16:41.871182 3231 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 25 01:16:41.872091 kubelet[3231]: I0325 01:16:41.871316 3231 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:16:41.872091 kubelet[3231]: I0325 01:16:41.871763 3231 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 25 01:16:41.872091 kubelet[3231]: I0325 01:16:41.871784 3231 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 25 01:16:41.872091 kubelet[3231]: I0325 01:16:41.871818 3231 policy_none.go:49] "None policy: Start" Mar 25 01:16:41.875862 kubelet[3231]: I0325 01:16:41.875111 3231 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 25 01:16:41.875862 kubelet[3231]: I0325 01:16:41.875168 3231 state_mem.go:35] "Initializing new in-memory state store" Mar 25 01:16:41.875862 kubelet[3231]: I0325 01:16:41.875452 3231 state_mem.go:75] "Updated machine memory state" Mar 25 01:16:41.887345 kubelet[3231]: I0325 01:16:41.887294 3231 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 01:16:41.887608 kubelet[3231]: I0325 01:16:41.887572 3231 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 25 01:16:41.887683 kubelet[3231]: I0325 01:16:41.887605 3231 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 01:16:41.888646 kubelet[3231]: I0325 01:16:41.887948 3231 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 01:16:42.017036 kubelet[3231]: I0325 01:16:42.016915 3231 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-121" Mar 25 01:16:42.030695 kubelet[3231]: I0325 01:16:42.030158 3231 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-23-121" Mar 25 01:16:42.030695 kubelet[3231]: I0325 01:16:42.030458 3231 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-23-121" Mar 25 01:16:42.111814 kubelet[3231]: I0325 01:16:42.111403 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4078446864e9d3909e7cefefa23a2fc-ca-certs\") pod \"kube-apiserver-ip-172-31-23-121\" (UID: \"d4078446864e9d3909e7cefefa23a2fc\") " pod="kube-system/kube-apiserver-ip-172-31-23-121" Mar 25 01:16:42.111814 kubelet[3231]: I0325 01:16:42.111468 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4078446864e9d3909e7cefefa23a2fc-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-121\" (UID: \"d4078446864e9d3909e7cefefa23a2fc\") " 
pod="kube-system/kube-apiserver-ip-172-31-23-121" Mar 25 01:16:42.111814 kubelet[3231]: I0325 01:16:42.111510 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79ab5c5898d3f263297c1e096fac44cc-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-121\" (UID: \"79ab5c5898d3f263297c1e096fac44cc\") " pod="kube-system/kube-controller-manager-ip-172-31-23-121" Mar 25 01:16:42.111814 kubelet[3231]: I0325 01:16:42.111551 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79ab5c5898d3f263297c1e096fac44cc-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-121\" (UID: \"79ab5c5898d3f263297c1e096fac44cc\") " pod="kube-system/kube-controller-manager-ip-172-31-23-121" Mar 25 01:16:42.111814 kubelet[3231]: I0325 01:16:42.111587 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/79ab5c5898d3f263297c1e096fac44cc-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-121\" (UID: \"79ab5c5898d3f263297c1e096fac44cc\") " pod="kube-system/kube-controller-manager-ip-172-31-23-121" Mar 25 01:16:42.112192 kubelet[3231]: I0325 01:16:42.111624 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79ab5c5898d3f263297c1e096fac44cc-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-121\" (UID: \"79ab5c5898d3f263297c1e096fac44cc\") " pod="kube-system/kube-controller-manager-ip-172-31-23-121" Mar 25 01:16:42.112192 kubelet[3231]: I0325 01:16:42.111665 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f1156b1eb01804383b87170a6db946d7-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-121\" (UID: \"f1156b1eb01804383b87170a6db946d7\") " pod="kube-system/kube-scheduler-ip-172-31-23-121" Mar 25 01:16:42.112192 kubelet[3231]: I0325 01:16:42.111718 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4078446864e9d3909e7cefefa23a2fc-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-121\" (UID: \"d4078446864e9d3909e7cefefa23a2fc\") " pod="kube-system/kube-apiserver-ip-172-31-23-121" Mar 25 01:16:42.112192 kubelet[3231]: I0325 01:16:42.111760 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/79ab5c5898d3f263297c1e096fac44cc-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-121\" (UID: \"79ab5c5898d3f263297c1e096fac44cc\") " pod="kube-system/kube-controller-manager-ip-172-31-23-121" Mar 25 01:16:42.575494 sudo[3244]: pam_unix(sudo:session): session closed for user root Mar 25 01:16:42.642545 kubelet[3231]: I0325 01:16:42.642475 3231 apiserver.go:52] "Watching apiserver" Mar 25 01:16:42.702554 kubelet[3231]: I0325 01:16:42.702475 3231 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 25 01:16:42.860986 kubelet[3231]: I0325 01:16:42.859682 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-121" podStartSLOduration=0.859659027 
podStartE2EDuration="859.659027ms" podCreationTimestamp="2025-03-25 01:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:16:42.854848523 +0000 UTC m=+1.354995945" watchObservedRunningTime="2025-03-25 01:16:42.859659027 +0000 UTC m=+1.359806449" Mar 25 01:16:42.911009 kubelet[3231]: I0325 01:16:42.910377 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-121" podStartSLOduration=0.910355743 podStartE2EDuration="910.355743ms" podCreationTimestamp="2025-03-25 01:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:16:42.881116511 +0000 UTC m=+1.381263945" watchObservedRunningTime="2025-03-25 01:16:42.910355743 +0000 UTC m=+1.410503177" Mar 25 01:16:42.911009 kubelet[3231]: I0325 01:16:42.910508 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-121" podStartSLOduration=0.910499275 podStartE2EDuration="910.499275ms" podCreationTimestamp="2025-03-25 01:16:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:16:42.907119358 +0000 UTC m=+1.407266792" watchObservedRunningTime="2025-03-25 01:16:42.910499275 +0000 UTC m=+1.410646685" Mar 25 01:16:44.664979 update_engine[1938]: I20250325 01:16:44.664426 1938 update_attempter.cc:509] Updating boot flags... Mar 25 01:16:44.777909 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3297) Mar 25 01:16:45.573606 sudo[2295]: pam_unix(sudo:session): session closed for user root Mar 25 01:16:45.596655 sshd[2294]: Connection closed by 147.75.109.163 port 49286 Mar 25 01:16:45.597516 sshd-session[2292]: pam_unix(sshd:session): session closed for user core Mar 25 01:16:45.607370 systemd[1]: sshd@6-172.31.23.121:22-147.75.109.163:49286.service: Deactivated successfully. Mar 25 01:16:45.607730 systemd-logind[1936]: Session 7 logged out. Waiting for processes to exit. Mar 25 01:16:45.612770 systemd[1]: session-7.scope: Deactivated successfully. Mar 25 01:16:45.614411 systemd[1]: session-7.scope: Consumed 10.583s CPU time, 262.3M memory peak. Mar 25 01:16:45.617132 systemd-logind[1936]: Removed session 7. Mar 25 01:16:46.892477 kubelet[3231]: I0325 01:16:46.892420 3231 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 25 01:16:46.893072 containerd[1952]: time="2025-03-25T01:16:46.892865059Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 25 01:16:46.895819 kubelet[3231]: I0325 01:16:46.895728 3231 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 25 01:16:47.711635 systemd[1]: Created slice kubepods-besteffort-pod149e2e56_36ec_48b6_893a_4b824455c536.slice - libcontainer container kubepods-besteffort-pod149e2e56_36ec_48b6_893a_4b824455c536.slice. Mar 25 01:16:47.743707 systemd[1]: Created slice kubepods-burstable-pod0969260e_64dc_4bd0_bf5c_fdf82c8250da.slice - libcontainer container kubepods-burstable-pod0969260e_64dc_4bd0_bf5c_fdf82c8250da.slice. 
Mar 25 01:16:47.752470 kubelet[3231]: I0325 01:16:47.750704 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0969260e-64dc-4bd0-bf5c-fdf82c8250da-clustermesh-secrets\") pod \"cilium-6ls5n\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " pod="kube-system/cilium-6ls5n" Mar 25 01:16:47.752470 kubelet[3231]: I0325 01:16:47.750767 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/149e2e56-36ec-48b6-893a-4b824455c536-xtables-lock\") pod \"kube-proxy-njj4b\" (UID: \"149e2e56-36ec-48b6-893a-4b824455c536\") " pod="kube-system/kube-proxy-njj4b" Mar 25 01:16:47.752470 kubelet[3231]: I0325 01:16:47.750804 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0969260e-64dc-4bd0-bf5c-fdf82c8250da-cilium-config-path\") pod \"cilium-6ls5n\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " pod="kube-system/cilium-6ls5n" Mar 25 01:16:47.752470 kubelet[3231]: I0325 01:16:47.750839 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk282\" (UniqueName: \"kubernetes.io/projected/0969260e-64dc-4bd0-bf5c-fdf82c8250da-kube-api-access-gk282\") pod \"cilium-6ls5n\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " pod="kube-system/cilium-6ls5n" Mar 25 01:16:47.752470 kubelet[3231]: I0325 01:16:47.750884 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-xtables-lock\") pod \"cilium-6ls5n\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " pod="kube-system/cilium-6ls5n" Mar 25 01:16:47.752836 kubelet[3231]: I0325 01:16:47.750918 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0969260e-64dc-4bd0-bf5c-fdf82c8250da-hubble-tls\") pod \"cilium-6ls5n\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " pod="kube-system/cilium-6ls5n" Mar 25 01:16:47.752836 kubelet[3231]: I0325 01:16:47.750951 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/149e2e56-36ec-48b6-893a-4b824455c536-lib-modules\") pod \"kube-proxy-njj4b\" (UID: \"149e2e56-36ec-48b6-893a-4b824455c536\") " pod="kube-system/kube-proxy-njj4b" Mar 25 01:16:47.752836 kubelet[3231]: I0325 01:16:47.750984 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-cilium-run\") pod \"cilium-6ls5n\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " pod="kube-system/cilium-6ls5n" Mar 25 01:16:47.752836 kubelet[3231]: I0325 01:16:47.751017 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-bpf-maps\") pod \"cilium-6ls5n\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " pod="kube-system/cilium-6ls5n" Mar 25 01:16:47.752836 kubelet[3231]: I0325 01:16:47.751050 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-cilium-cgroup\") pod \"cilium-6ls5n\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " pod="kube-system/cilium-6ls5n" Mar 25 01:16:47.752836 kubelet[3231]: I0325 01:16:47.751083 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-etc-cni-netd\") pod \"cilium-6ls5n\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " pod="kube-system/cilium-6ls5n" Mar 25 01:16:47.753129 kubelet[3231]: I0325 01:16:47.751118 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-lib-modules\") pod \"cilium-6ls5n\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " pod="kube-system/cilium-6ls5n" Mar 25 01:16:47.753129 kubelet[3231]: I0325 01:16:47.751157 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-host-proc-sys-net\") pod \"cilium-6ls5n\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " pod="kube-system/cilium-6ls5n" Mar 25 01:16:47.753129 kubelet[3231]: I0325 01:16:47.751191 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw8mq\" (UniqueName: \"kubernetes.io/projected/149e2e56-36ec-48b6-893a-4b824455c536-kube-api-access-xw8mq\") pod \"kube-proxy-njj4b\" (UID: \"149e2e56-36ec-48b6-893a-4b824455c536\") " pod="kube-system/kube-proxy-njj4b" Mar 25 01:16:47.753129 kubelet[3231]: I0325 01:16:47.751281 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-hostproc\") pod \"cilium-6ls5n\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " pod="kube-system/cilium-6ls5n" Mar 25 01:16:47.753129 kubelet[3231]: I0325 01:16:47.751321 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-cni-path\") pod \"cilium-6ls5n\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " pod="kube-system/cilium-6ls5n" Mar 25 01:16:47.753129 kubelet[3231]: I0325 01:16:47.751354 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-host-proc-sys-kernel\") pod \"cilium-6ls5n\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " pod="kube-system/cilium-6ls5n" Mar 25 01:16:47.753587 kubelet[3231]: I0325 01:16:47.751402 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/149e2e56-36ec-48b6-893a-4b824455c536-kube-proxy\") pod \"kube-proxy-njj4b\" (UID: \"149e2e56-36ec-48b6-893a-4b824455c536\") " pod="kube-system/kube-proxy-njj4b" Mar 25 01:16:48.030637 containerd[1952]: time="2025-03-25T01:16:48.030180612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-njj4b,Uid:149e2e56-36ec-48b6-893a-4b824455c536,Namespace:kube-system,Attempt:0,}" Mar 25 01:16:48.056242 containerd[1952]: time="2025-03-25T01:16:48.055683986Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6ls5n,Uid:0969260e-64dc-4bd0-bf5c-fdf82c8250da,Namespace:kube-system,Attempt:0,}" Mar 25 01:16:48.108500 containerd[1952]: time="2025-03-25T01:16:48.107624655Z" level=info msg="connecting to shim 90950e280d2e540b3349dba9b68467bc65cceb301d3f1d17afb3255363c73548" address="unix:///run/containerd/s/f4016917ce34a26b42c160d08c846b6d0f31ba76434fdf4833a5359481430e33" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:16:48.118313 systemd[1]: Created slice kubepods-besteffort-pod33fcaa15_597f_4e8a_ba47_2c25cdf67270.slice - libcontainer container kubepods-besteffort-pod33fcaa15_597f_4e8a_ba47_2c25cdf67270.slice. Mar 25 01:16:48.151947 containerd[1952]: time="2025-03-25T01:16:48.151889085Z" level=info msg="connecting to shim 6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946" address="unix:///run/containerd/s/b251f04a7c3a3e21e4ced82cc20cdbf93909cacc96cfa8629be3c53851816738" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:16:48.154635 kubelet[3231]: I0325 01:16:48.154476 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33fcaa15-597f-4e8a-ba47-2c25cdf67270-cilium-config-path\") pod \"cilium-operator-5d85765b45-dcwdh\" (UID: \"33fcaa15-597f-4e8a-ba47-2c25cdf67270\") " pod="kube-system/cilium-operator-5d85765b45-dcwdh" Mar 25 01:16:48.154635 kubelet[3231]: I0325 01:16:48.154551 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d97d\" (UniqueName: \"kubernetes.io/projected/33fcaa15-597f-4e8a-ba47-2c25cdf67270-kube-api-access-7d97d\") pod \"cilium-operator-5d85765b45-dcwdh\" (UID: \"33fcaa15-597f-4e8a-ba47-2c25cdf67270\") " pod="kube-system/cilium-operator-5d85765b45-dcwdh" Mar 25 01:16:48.189535 systemd[1]: Started cri-containerd-90950e280d2e540b3349dba9b68467bc65cceb301d3f1d17afb3255363c73548.scope - libcontainer container 90950e280d2e540b3349dba9b68467bc65cceb301d3f1d17afb3255363c73548. Mar 25 01:16:48.223494 systemd[1]: Started cri-containerd-6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946.scope - libcontainer container 6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946. 
Mar 25 01:16:48.285303 containerd[1952]: time="2025-03-25T01:16:48.284281637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-njj4b,Uid:149e2e56-36ec-48b6-893a-4b824455c536,Namespace:kube-system,Attempt:0,} returns sandbox id \"90950e280d2e540b3349dba9b68467bc65cceb301d3f1d17afb3255363c73548\"" Mar 25 01:16:48.294485 containerd[1952]: time="2025-03-25T01:16:48.294415782Z" level=info msg="CreateContainer within sandbox \"90950e280d2e540b3349dba9b68467bc65cceb301d3f1d17afb3255363c73548\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 25 01:16:48.322698 containerd[1952]: time="2025-03-25T01:16:48.321162737Z" level=info msg="Container ca31f78fda6a6995509751bc5cb2841ffb1691db2349b56fd68e8a99463ae25a: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:16:48.322698 containerd[1952]: time="2025-03-25T01:16:48.321618244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6ls5n,Uid:0969260e-64dc-4bd0-bf5c-fdf82c8250da,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\"" Mar 25 01:16:48.326176 containerd[1952]: time="2025-03-25T01:16:48.326109125Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 25 01:16:48.343780 containerd[1952]: time="2025-03-25T01:16:48.343717785Z" level=info msg="CreateContainer within sandbox \"90950e280d2e540b3349dba9b68467bc65cceb301d3f1d17afb3255363c73548\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ca31f78fda6a6995509751bc5cb2841ffb1691db2349b56fd68e8a99463ae25a\"" Mar 25 01:16:48.345268 containerd[1952]: time="2025-03-25T01:16:48.345186202Z" level=info msg="StartContainer for \"ca31f78fda6a6995509751bc5cb2841ffb1691db2349b56fd68e8a99463ae25a\"" Mar 25 01:16:48.348831 containerd[1952]: time="2025-03-25T01:16:48.348768433Z" level=info msg="connecting to shim ca31f78fda6a6995509751bc5cb2841ffb1691db2349b56fd68e8a99463ae25a" address="unix:///run/containerd/s/f4016917ce34a26b42c160d08c846b6d0f31ba76434fdf4833a5359481430e33" protocol=ttrpc version=3 Mar 25 01:16:48.386496 systemd[1]: Started cri-containerd-ca31f78fda6a6995509751bc5cb2841ffb1691db2349b56fd68e8a99463ae25a.scope - libcontainer container ca31f78fda6a6995509751bc5cb2841ffb1691db2349b56fd68e8a99463ae25a. Mar 25 01:16:48.430318 containerd[1952]: time="2025-03-25T01:16:48.429864197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dcwdh,Uid:33fcaa15-597f-4e8a-ba47-2c25cdf67270,Namespace:kube-system,Attempt:0,}" Mar 25 01:16:48.484195 containerd[1952]: time="2025-03-25T01:16:48.484149458Z" level=info msg="StartContainer for \"ca31f78fda6a6995509751bc5cb2841ffb1691db2349b56fd68e8a99463ae25a\" returns successfully" Mar 25 01:16:48.498160 containerd[1952]: time="2025-03-25T01:16:48.498095880Z" level=info msg="connecting to shim 04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3" address="unix:///run/containerd/s/fab1cff6631d6ebc0e0338251587b5a6adfe64319a7fd7b3aa130136b93d5651" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:16:48.549587 systemd[1]: Started cri-containerd-04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3.scope - libcontainer container 04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3. 
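Aside: between the RunPodSandbox request logged above at 01:16:48.030 and the matching "returns sandbox id" entry at 01:16:48.284, the kube-proxy-njj4b sandbox took roughly a quarter of a second to come up. A small, self-contained check of that arithmetic, using the two containerd timestamps copied from the entries above; note they carry nanosecond precision, which has to be trimmed to microseconds before Python's datetime will parse it.

from datetime import datetime, timezone

def parse_containerd_ts(ts: str) -> datetime:
    """Parse containerd's RFC 3339 timestamps, truncating ns -> us for datetime."""
    body, frac = ts.rstrip("Z").split(".")
    return datetime.strptime(f"{body}.{frac[:6]}", "%Y-%m-%dT%H:%M:%S.%f").replace(tzinfo=timezone.utc)

requested = parse_containerd_ts("2025-03-25T01:16:48.030180612Z")  # RunPodSandbox for kube-proxy-njj4b
returned  = parse_containerd_ts("2025-03-25T01:16:48.284281637Z")  # returns sandbox id 90950e28...
print((returned - requested).total_seconds())                      # ~0.254 s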
Mar 25 01:16:48.648522 containerd[1952]: time="2025-03-25T01:16:48.648329709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dcwdh,Uid:33fcaa15-597f-4e8a-ba47-2c25cdf67270,Namespace:kube-system,Attempt:0,} returns sandbox id \"04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3\"" Mar 25 01:16:49.521603 kubelet[3231]: I0325 01:16:49.520887 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-njj4b" podStartSLOduration=2.5208640239999998 podStartE2EDuration="2.520864024s" podCreationTimestamp="2025-03-25 01:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:16:48.871002963 +0000 UTC m=+7.371150397" watchObservedRunningTime="2025-03-25 01:16:49.520864024 +0000 UTC m=+8.021011434" Mar 25 01:16:54.468538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount784800958.mount: Deactivated successfully. Mar 25 01:16:56.941505 containerd[1952]: time="2025-03-25T01:16:56.941441598Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:56.944733 containerd[1952]: time="2025-03-25T01:16:56.944604711Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 25 01:16:56.946911 containerd[1952]: time="2025-03-25T01:16:56.946823900Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:16:56.952224 containerd[1952]: time="2025-03-25T01:16:56.952128175Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.625947098s" Mar 25 01:16:56.952224 containerd[1952]: time="2025-03-25T01:16:56.952194976Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 25 01:16:56.956361 containerd[1952]: time="2025-03-25T01:16:56.956290979Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 25 01:16:56.958484 containerd[1952]: time="2025-03-25T01:16:56.957667815Z" level=info msg="CreateContainer within sandbox \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 25 01:16:56.974543 containerd[1952]: time="2025-03-25T01:16:56.974477200Z" level=info msg="Container fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:16:56.989682 containerd[1952]: time="2025-03-25T01:16:56.989581624Z" level=info msg="CreateContainer within sandbox \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638\"" Mar 25 01:16:56.990682 containerd[1952]: time="2025-03-25T01:16:56.990436572Z" level=info msg="StartContainer for \"fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638\"" Mar 25 01:16:56.992735 containerd[1952]: time="2025-03-25T01:16:56.992450686Z" level=info msg="connecting to shim fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638" address="unix:///run/containerd/s/b251f04a7c3a3e21e4ced82cc20cdbf93909cacc96cfa8629be3c53851816738" protocol=ttrpc version=3 Mar 25 01:16:57.033518 systemd[1]: Started cri-containerd-fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638.scope - libcontainer container fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638. Mar 25 01:16:57.092436 containerd[1952]: time="2025-03-25T01:16:57.091001253Z" level=info msg="StartContainer for \"fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638\" returns successfully" Mar 25 01:16:57.117806 systemd[1]: cri-containerd-fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638.scope: Deactivated successfully. Mar 25 01:16:57.118719 systemd[1]: cri-containerd-fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638.scope: Consumed 42ms CPU time, 6.4M memory peak, 3.1M written to disk. Mar 25 01:16:57.122663 containerd[1952]: time="2025-03-25T01:16:57.121359001Z" level=info msg="received exit event container_id:\"fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638\" id:\"fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638\" pid:3735 exited_at:{seconds:1742865417 nanos:119667188}" Mar 25 01:16:57.123349 containerd[1952]: time="2025-03-25T01:16:57.122855957Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638\" id:\"fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638\" pid:3735 exited_at:{seconds:1742865417 nanos:119667188}" Mar 25 01:16:57.175149 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638-rootfs.mount: Deactivated successfully. 
Mar 25 01:16:58.894592 containerd[1952]: time="2025-03-25T01:16:58.894376219Z" level=info msg="CreateContainer within sandbox \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 25 01:16:58.915255 containerd[1952]: time="2025-03-25T01:16:58.915146563Z" level=info msg="Container bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:16:58.940137 containerd[1952]: time="2025-03-25T01:16:58.939904796Z" level=info msg="CreateContainer within sandbox \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3\"" Mar 25 01:16:58.942180 containerd[1952]: time="2025-03-25T01:16:58.941895823Z" level=info msg="StartContainer for \"bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3\"" Mar 25 01:16:58.945824 containerd[1952]: time="2025-03-25T01:16:58.945465135Z" level=info msg="connecting to shim bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3" address="unix:///run/containerd/s/b251f04a7c3a3e21e4ced82cc20cdbf93909cacc96cfa8629be3c53851816738" protocol=ttrpc version=3 Mar 25 01:16:58.979505 systemd[1]: Started cri-containerd-bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3.scope - libcontainer container bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3. Mar 25 01:16:59.046695 containerd[1952]: time="2025-03-25T01:16:59.045029225Z" level=info msg="StartContainer for \"bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3\" returns successfully" Mar 25 01:16:59.067872 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 25 01:16:59.068887 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:16:59.070412 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 25 01:16:59.079267 containerd[1952]: time="2025-03-25T01:16:59.077591094Z" level=info msg="received exit event container_id:\"bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3\" id:\"bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3\" pid:3782 exited_at:{seconds:1742865419 nanos:77258672}" Mar 25 01:16:59.078377 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 25 01:16:59.080866 containerd[1952]: time="2025-03-25T01:16:59.080791942Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3\" id:\"bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3\" pid:3782 exited_at:{seconds:1742865419 nanos:77258672}" Mar 25 01:16:59.084376 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 25 01:16:59.085387 systemd[1]: cri-containerd-bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3.scope: Deactivated successfully. Mar 25 01:16:59.139289 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 25 01:16:59.903430 containerd[1952]: time="2025-03-25T01:16:59.903359397Z" level=info msg="CreateContainer within sandbox \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 25 01:16:59.920836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3-rootfs.mount: Deactivated successfully. Mar 25 01:16:59.946295 containerd[1952]: time="2025-03-25T01:16:59.945710575Z" level=info msg="Container 7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:16:59.962398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2316699269.mount: Deactivated successfully. Mar 25 01:16:59.989025 containerd[1952]: time="2025-03-25T01:16:59.988964507Z" level=info msg="CreateContainer within sandbox \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d\"" Mar 25 01:16:59.992929 containerd[1952]: time="2025-03-25T01:16:59.992505341Z" level=info msg="StartContainer for \"7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d\"" Mar 25 01:17:00.010826 containerd[1952]: time="2025-03-25T01:17:00.009155622Z" level=info msg="connecting to shim 7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d" address="unix:///run/containerd/s/b251f04a7c3a3e21e4ced82cc20cdbf93909cacc96cfa8629be3c53851816738" protocol=ttrpc version=3 Mar 25 01:17:00.062568 systemd[1]: Started cri-containerd-7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d.scope - libcontainer container 7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d. Mar 25 01:17:00.177469 systemd[1]: cri-containerd-7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d.scope: Deactivated successfully. Mar 25 01:17:00.180597 containerd[1952]: time="2025-03-25T01:17:00.180178860Z" level=info msg="StartContainer for \"7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d\" returns successfully" Mar 25 01:17:00.185423 containerd[1952]: time="2025-03-25T01:17:00.185358921Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d\" id:\"7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d\" pid:3841 exited_at:{seconds:1742865420 nanos:184461076}" Mar 25 01:17:00.185629 containerd[1952]: time="2025-03-25T01:17:00.185576025Z" level=info msg="received exit event container_id:\"7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d\" id:\"7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d\" pid:3841 exited_at:{seconds:1742865420 nanos:184461076}" Mar 25 01:17:00.241938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d-rootfs.mount: Deactivated successfully. 
Mar 25 01:17:00.375540 containerd[1952]: time="2025-03-25T01:17:00.375476585Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:00.376905 containerd[1952]: time="2025-03-25T01:17:00.376828953Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 25 01:17:00.377722 containerd[1952]: time="2025-03-25T01:17:00.377630378Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:17:00.381161 containerd[1952]: time="2025-03-25T01:17:00.380247434Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.423892931s" Mar 25 01:17:00.381161 containerd[1952]: time="2025-03-25T01:17:00.380312902Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 25 01:17:00.385882 containerd[1952]: time="2025-03-25T01:17:00.385599937Z" level=info msg="CreateContainer within sandbox \"04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 25 01:17:00.395984 containerd[1952]: time="2025-03-25T01:17:00.395909825Z" level=info msg="Container 8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:17:00.411551 containerd[1952]: time="2025-03-25T01:17:00.411428156Z" level=info msg="CreateContainer within sandbox \"04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78\"" Mar 25 01:17:00.412934 containerd[1952]: time="2025-03-25T01:17:00.412664774Z" level=info msg="StartContainer for \"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78\"" Mar 25 01:17:00.414341 containerd[1952]: time="2025-03-25T01:17:00.414268716Z" level=info msg="connecting to shim 8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78" address="unix:///run/containerd/s/fab1cff6631d6ebc0e0338251587b5a6adfe64319a7fd7b3aa130136b93d5651" protocol=ttrpc version=3 Mar 25 01:17:00.447539 systemd[1]: Started cri-containerd-8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78.scope - libcontainer container 8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78. 
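Aside: both PullImage results in this log record how many bytes were read and how long the pull took (157,646,710 bytes in ~8.63 s for the cilium image at 01:16:56, and 17,135,306 bytes in ~3.42 s for operator-generic here), which is enough to estimate effective pull throughput. A back-of-the-envelope calculation with those logged figures:

pulls = [
    # (image, bytes read per "stop pulling image", duration per "Pulled image ... in ...s")
    ("quay.io/cilium/cilium:v1.12.5",           157_646_710, 8.625947098),
    ("quay.io/cilium/operator-generic:v1.12.5",  17_135_306, 3.423892931),
]

for image, nbytes, secs in pulls:
    mib_per_s = nbytes / secs / 2**20
    print(f"{image}: {mib_per_s:.1f} MiB/s")   # ~17.4 and ~4.8 MiB/s respectively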
Mar 25 01:17:00.504525 containerd[1952]: time="2025-03-25T01:17:00.503567256Z" level=info msg="StartContainer for \"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78\" returns successfully" Mar 25 01:17:00.931257 containerd[1952]: time="2025-03-25T01:17:00.931187122Z" level=info msg="CreateContainer within sandbox \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 25 01:17:00.962922 containerd[1952]: time="2025-03-25T01:17:00.959828011Z" level=info msg="Container 812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:17:00.989805 containerd[1952]: time="2025-03-25T01:17:00.989601499Z" level=info msg="CreateContainer within sandbox \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8\"" Mar 25 01:17:00.990691 containerd[1952]: time="2025-03-25T01:17:00.990622153Z" level=info msg="StartContainer for \"812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8\"" Mar 25 01:17:00.994522 containerd[1952]: time="2025-03-25T01:17:00.992170542Z" level=info msg="connecting to shim 812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8" address="unix:///run/containerd/s/b251f04a7c3a3e21e4ced82cc20cdbf93909cacc96cfa8629be3c53851816738" protocol=ttrpc version=3 Mar 25 01:17:01.040720 kubelet[3231]: I0325 01:17:01.040328 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-dcwdh" podStartSLOduration=1.309401829 podStartE2EDuration="13.040302069s" podCreationTimestamp="2025-03-25 01:16:48 +0000 UTC" firstStartedPulling="2025-03-25 01:16:48.650462611 +0000 UTC m=+7.150610021" lastFinishedPulling="2025-03-25 01:17:00.381362851 +0000 UTC m=+18.881510261" observedRunningTime="2025-03-25 01:17:00.952878002 +0000 UTC m=+19.453025664" watchObservedRunningTime="2025-03-25 01:17:01.040302069 +0000 UTC m=+19.540449515" Mar 25 01:17:01.074531 systemd[1]: Started cri-containerd-812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8.scope - libcontainer container 812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8. Mar 25 01:17:01.195636 systemd[1]: cri-containerd-812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8.scope: Deactivated successfully. 
Mar 25 01:17:01.199282 containerd[1952]: time="2025-03-25T01:17:01.198923961Z" level=info msg="received exit event container_id:\"812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8\" id:\"812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8\" pid:3918 exited_at:{seconds:1742865421 nanos:198062350}" Mar 25 01:17:01.205004 containerd[1952]: time="2025-03-25T01:17:01.204319121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8\" id:\"812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8\" pid:3918 exited_at:{seconds:1742865421 nanos:198062350}" Mar 25 01:17:01.210811 containerd[1952]: time="2025-03-25T01:17:01.210749811Z" level=info msg="StartContainer for \"812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8\" returns successfully" Mar 25 01:17:01.273539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8-rootfs.mount: Deactivated successfully. Mar 25 01:17:01.952318 containerd[1952]: time="2025-03-25T01:17:01.952153477Z" level=info msg="CreateContainer within sandbox \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 25 01:17:01.992194 containerd[1952]: time="2025-03-25T01:17:01.984842801Z" level=info msg="Container 6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:17:01.990999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2331054679.mount: Deactivated successfully. Mar 25 01:17:02.013999 containerd[1952]: time="2025-03-25T01:17:02.013804996Z" level=info msg="CreateContainer within sandbox \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\"" Mar 25 01:17:02.024276 containerd[1952]: time="2025-03-25T01:17:02.020783315Z" level=info msg="StartContainer for \"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\"" Mar 25 01:17:02.026911 containerd[1952]: time="2025-03-25T01:17:02.026847173Z" level=info msg="connecting to shim 6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22" address="unix:///run/containerd/s/b251f04a7c3a3e21e4ced82cc20cdbf93909cacc96cfa8629be3c53851816738" protocol=ttrpc version=3 Mar 25 01:17:02.091568 systemd[1]: Started cri-containerd-6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22.scope - libcontainer container 6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22. 
Mar 25 01:17:02.235468 containerd[1952]: time="2025-03-25T01:17:02.235416680Z" level=info msg="StartContainer for \"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\" returns successfully" Mar 25 01:17:02.413886 containerd[1952]: time="2025-03-25T01:17:02.413218337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\" id:\"4a7eb7e30b0e132927ff6506797584997a32c32d63bb092aaa3c7fdd6e7428af\" pid:3984 exited_at:{seconds:1742865422 nanos:412648809}" Mar 25 01:17:02.492563 kubelet[3231]: I0325 01:17:02.492016 3231 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 25 01:17:02.559643 systemd[1]: Created slice kubepods-burstable-pod717e9d43_6efc_4080_95fa_36396cfcf187.slice - libcontainer container kubepods-burstable-pod717e9d43_6efc_4080_95fa_36396cfcf187.slice. Mar 25 01:17:02.569790 kubelet[3231]: I0325 01:17:02.569693 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg856\" (UniqueName: \"kubernetes.io/projected/717e9d43-6efc-4080-95fa-36396cfcf187-kube-api-access-xg856\") pod \"coredns-6f6b679f8f-cz5xn\" (UID: \"717e9d43-6efc-4080-95fa-36396cfcf187\") " pod="kube-system/coredns-6f6b679f8f-cz5xn" Mar 25 01:17:02.570700 kubelet[3231]: I0325 01:17:02.570603 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/717e9d43-6efc-4080-95fa-36396cfcf187-config-volume\") pod \"coredns-6f6b679f8f-cz5xn\" (UID: \"717e9d43-6efc-4080-95fa-36396cfcf187\") " pod="kube-system/coredns-6f6b679f8f-cz5xn" Mar 25 01:17:02.583985 systemd[1]: Created slice kubepods-burstable-poddb475e4d_c3d4_49a7_bde5_6879ed188281.slice - libcontainer container kubepods-burstable-poddb475e4d_c3d4_49a7_bde5_6879ed188281.slice. 
Mar 25 01:17:02.670921 kubelet[3231]: I0325 01:17:02.670836 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s8s4\" (UniqueName: \"kubernetes.io/projected/db475e4d-c3d4-49a7-bde5-6879ed188281-kube-api-access-9s8s4\") pod \"coredns-6f6b679f8f-tphhc\" (UID: \"db475e4d-c3d4-49a7-bde5-6879ed188281\") " pod="kube-system/coredns-6f6b679f8f-tphhc" Mar 25 01:17:02.672231 kubelet[3231]: I0325 01:17:02.671190 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db475e4d-c3d4-49a7-bde5-6879ed188281-config-volume\") pod \"coredns-6f6b679f8f-tphhc\" (UID: \"db475e4d-c3d4-49a7-bde5-6879ed188281\") " pod="kube-system/coredns-6f6b679f8f-tphhc" Mar 25 01:17:02.871709 containerd[1952]: time="2025-03-25T01:17:02.871460155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cz5xn,Uid:717e9d43-6efc-4080-95fa-36396cfcf187,Namespace:kube-system,Attempt:0,}" Mar 25 01:17:02.894254 containerd[1952]: time="2025-03-25T01:17:02.894182807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tphhc,Uid:db475e4d-c3d4-49a7-bde5-6879ed188281,Namespace:kube-system,Attempt:0,}" Mar 25 01:17:03.093194 kubelet[3231]: I0325 01:17:03.092973 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6ls5n" podStartSLOduration=7.463386156 podStartE2EDuration="16.092952331s" podCreationTimestamp="2025-03-25 01:16:47 +0000 UTC" firstStartedPulling="2025-03-25 01:16:48.32415474 +0000 UTC m=+6.824302150" lastFinishedPulling="2025-03-25 01:16:56.953720915 +0000 UTC m=+15.453868325" observedRunningTime="2025-03-25 01:17:03.092281424 +0000 UTC m=+21.592428846" watchObservedRunningTime="2025-03-25 01:17:03.092952331 +0000 UTC m=+21.593099741" Mar 25 01:17:05.297147 (udev-worker)[4090]: Network interface NamePolicy= disabled on kernel command line. Mar 25 01:17:05.298158 systemd-networkd[1856]: cilium_host: Link UP Mar 25 01:17:05.299725 systemd-networkd[1856]: cilium_net: Link UP Mar 25 01:17:05.300089 systemd-networkd[1856]: cilium_net: Gained carrier Mar 25 01:17:05.300481 systemd-networkd[1856]: cilium_host: Gained carrier Mar 25 01:17:05.302018 (udev-worker)[4046]: Network interface NamePolicy= disabled on kernel command line. Mar 25 01:17:05.358706 systemd-networkd[1856]: cilium_host: Gained IPv6LL Mar 25 01:17:05.462842 (udev-worker)[4100]: Network interface NamePolicy= disabled on kernel command line. Mar 25 01:17:05.479399 systemd-networkd[1856]: cilium_vxlan: Link UP Mar 25 01:17:05.479414 systemd-networkd[1856]: cilium_vxlan: Gained carrier Mar 25 01:17:05.961299 kernel: NET: Registered PF_ALG protocol family Mar 25 01:17:06.037492 systemd-networkd[1856]: cilium_net: Gained IPv6LL Mar 25 01:17:07.062526 systemd-networkd[1856]: cilium_vxlan: Gained IPv6LL Mar 25 01:17:07.250363 (udev-worker)[4101]: Network interface NamePolicy= disabled on kernel command line. Mar 25 01:17:07.251579 systemd-networkd[1856]: lxc_health: Link UP Mar 25 01:17:07.268243 systemd-networkd[1856]: lxc_health: Gained carrier Mar 25 01:17:07.938437 systemd-networkd[1856]: lxc25d57a5aee8b: Link UP Mar 25 01:17:07.946446 kernel: eth0: renamed from tmp4a3c8 Mar 25 01:17:07.955177 systemd-networkd[1856]: lxc25d57a5aee8b: Gained carrier Mar 25 01:17:07.996384 (udev-worker)[4431]: Network interface NamePolicy= disabled on kernel command line. 
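Aside: for the cilium-6ls5n startup record above, the numbers fit together neatly: the logged podStartSLOduration (7.463386156 s) matches the E2E duration (16.092952331 s) minus the image-pull window (lastFinishedPulling - firstStartedPulling) to the nanosecond. A short check of that relationship using only the values quoted in the entry; this is an observation about these particular figures, not a claim about how the kubelet's latency tracker is implemented.

# Values quoted in the pod_startup_latency_tracker entry above (all within the same minute on 2025-03-25)
first_started_pulling = 48.32415474    # 01:16:48.32415474, seconds past 01:16:00
last_finished_pulling = 56.953720915   # 01:16:56.953720915
e2e_duration          = 16.092952331   # podStartE2EDuration, seconds

pull_window  = last_finished_pulling - first_started_pulling   # ~8.629566175 s
slo_duration = e2e_duration - pull_window
print(f"{slo_duration:.9f}")           # 7.463386156, the logged podStartSLOduration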
Mar 25 01:17:07.998264 kernel: eth0: renamed from tmpfd7f6 Mar 25 01:17:08.004895 systemd-networkd[1856]: lxcc10cc3eab10e: Link UP Mar 25 01:17:08.005530 systemd-networkd[1856]: lxcc10cc3eab10e: Gained carrier Mar 25 01:17:09.237503 systemd-networkd[1856]: lxc_health: Gained IPv6LL Mar 25 01:17:09.941949 systemd-networkd[1856]: lxcc10cc3eab10e: Gained IPv6LL Mar 25 01:17:10.006308 systemd-networkd[1856]: lxc25d57a5aee8b: Gained IPv6LL Mar 25 01:17:12.254409 ntpd[1929]: Listen normally on 7 cilium_host 192.168.0.21:123 Mar 25 01:17:12.255530 ntpd[1929]: Listen normally on 8 cilium_net [fe80::b02f:75ff:fee4:62af%4]:123 Mar 25 01:17:12.257237 ntpd[1929]: 25 Mar 01:17:12 ntpd[1929]: Listen normally on 7 cilium_host 192.168.0.21:123 Mar 25 01:17:12.257237 ntpd[1929]: 25 Mar 01:17:12 ntpd[1929]: Listen normally on 8 cilium_net [fe80::b02f:75ff:fee4:62af%4]:123 Mar 25 01:17:12.257237 ntpd[1929]: 25 Mar 01:17:12 ntpd[1929]: Listen normally on 9 cilium_host [fe80::9497:3bff:fe18:5683%5]:123 Mar 25 01:17:12.257237 ntpd[1929]: 25 Mar 01:17:12 ntpd[1929]: Listen normally on 10 cilium_vxlan [fe80::6c7d:faff:fe7d:9964%6]:123 Mar 25 01:17:12.257237 ntpd[1929]: 25 Mar 01:17:12 ntpd[1929]: Listen normally on 11 lxc_health [fe80::2c1b:91ff:fee3:78de%8]:123 Mar 25 01:17:12.257237 ntpd[1929]: 25 Mar 01:17:12 ntpd[1929]: Listen normally on 12 lxc25d57a5aee8b [fe80::33:84ff:fef9:9dcd%10]:123 Mar 25 01:17:12.257237 ntpd[1929]: 25 Mar 01:17:12 ntpd[1929]: Listen normally on 13 lxcc10cc3eab10e [fe80::f42a:14ff:fee4:c3ef%12]:123 Mar 25 01:17:12.256305 ntpd[1929]: Listen normally on 9 cilium_host [fe80::9497:3bff:fe18:5683%5]:123 Mar 25 01:17:12.256435 ntpd[1929]: Listen normally on 10 cilium_vxlan [fe80::6c7d:faff:fe7d:9964%6]:123 Mar 25 01:17:12.256505 ntpd[1929]: Listen normally on 11 lxc_health [fe80::2c1b:91ff:fee3:78de%8]:123 Mar 25 01:17:12.256571 ntpd[1929]: Listen normally on 12 lxc25d57a5aee8b [fe80::33:84ff:fef9:9dcd%10]:123 Mar 25 01:17:12.256638 ntpd[1929]: Listen normally on 13 lxcc10cc3eab10e [fe80::f42a:14ff:fee4:c3ef%12]:123 Mar 25 01:17:16.062544 containerd[1952]: time="2025-03-25T01:17:16.062461967Z" level=info msg="connecting to shim 4a3c8e9b747aeca15fbdfeb0acf04cc44cc4ec960c53235eaac74285a98f1501" address="unix:///run/containerd/s/09068395fd8e05e4a587f9cd81e070b907c110dd23eb97c629149f185d9b3be8" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:17:16.090278 kubelet[3231]: I0325 01:17:16.090017 3231 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 01:17:16.135537 systemd[1]: Started cri-containerd-4a3c8e9b747aeca15fbdfeb0acf04cc44cc4ec960c53235eaac74285a98f1501.scope - libcontainer container 4a3c8e9b747aeca15fbdfeb0acf04cc44cc4ec960c53235eaac74285a98f1501. Mar 25 01:17:16.232257 containerd[1952]: time="2025-03-25T01:17:16.231539465Z" level=info msg="connecting to shim fd7f6520c27ff656b44459f10bd4beaa9cbb8ecec502704dd6f4e73055967885" address="unix:///run/containerd/s/d103b80f119bbfb1ff0c1dcac5f541bfa3cd1c997b7ec0e35c5462624420f958" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:17:16.318775 systemd[1]: Started cri-containerd-fd7f6520c27ff656b44459f10bd4beaa9cbb8ecec502704dd6f4e73055967885.scope - libcontainer container fd7f6520c27ff656b44459f10bd4beaa9cbb8ecec502704dd6f4e73055967885. 
Mar 25 01:17:16.329395 containerd[1952]: time="2025-03-25T01:17:16.328922629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cz5xn,Uid:717e9d43-6efc-4080-95fa-36396cfcf187,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a3c8e9b747aeca15fbdfeb0acf04cc44cc4ec960c53235eaac74285a98f1501\"" Mar 25 01:17:16.343048 containerd[1952]: time="2025-03-25T01:17:16.342970045Z" level=info msg="CreateContainer within sandbox \"4a3c8e9b747aeca15fbdfeb0acf04cc44cc4ec960c53235eaac74285a98f1501\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 01:17:16.374435 containerd[1952]: time="2025-03-25T01:17:16.372451727Z" level=info msg="Container 82334fc1400e58f39dfedd586d6fcb7368f0eb7ed0c6c5f2ea268622436a6d20: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:17:16.389666 containerd[1952]: time="2025-03-25T01:17:16.389584961Z" level=info msg="CreateContainer within sandbox \"4a3c8e9b747aeca15fbdfeb0acf04cc44cc4ec960c53235eaac74285a98f1501\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"82334fc1400e58f39dfedd586d6fcb7368f0eb7ed0c6c5f2ea268622436a6d20\"" Mar 25 01:17:16.391401 containerd[1952]: time="2025-03-25T01:17:16.391339086Z" level=info msg="StartContainer for \"82334fc1400e58f39dfedd586d6fcb7368f0eb7ed0c6c5f2ea268622436a6d20\"" Mar 25 01:17:16.395566 containerd[1952]: time="2025-03-25T01:17:16.395487303Z" level=info msg="connecting to shim 82334fc1400e58f39dfedd586d6fcb7368f0eb7ed0c6c5f2ea268622436a6d20" address="unix:///run/containerd/s/09068395fd8e05e4a587f9cd81e070b907c110dd23eb97c629149f185d9b3be8" protocol=ttrpc version=3 Mar 25 01:17:16.465829 systemd[1]: Started cri-containerd-82334fc1400e58f39dfedd586d6fcb7368f0eb7ed0c6c5f2ea268622436a6d20.scope - libcontainer container 82334fc1400e58f39dfedd586d6fcb7368f0eb7ed0c6c5f2ea268622436a6d20. 
Mar 25 01:17:16.468627 containerd[1952]: time="2025-03-25T01:17:16.468561042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tphhc,Uid:db475e4d-c3d4-49a7-bde5-6879ed188281,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd7f6520c27ff656b44459f10bd4beaa9cbb8ecec502704dd6f4e73055967885\"" Mar 25 01:17:16.483961 containerd[1952]: time="2025-03-25T01:17:16.483890231Z" level=info msg="CreateContainer within sandbox \"fd7f6520c27ff656b44459f10bd4beaa9cbb8ecec502704dd6f4e73055967885\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 01:17:16.513541 containerd[1952]: time="2025-03-25T01:17:16.513469702Z" level=info msg="Container ce095c34222663726ffb025ae02637f2950b705f25a9052c994999b96f3d7e0b: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:17:16.560642 containerd[1952]: time="2025-03-25T01:17:16.560269486Z" level=info msg="CreateContainer within sandbox \"fd7f6520c27ff656b44459f10bd4beaa9cbb8ecec502704dd6f4e73055967885\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce095c34222663726ffb025ae02637f2950b705f25a9052c994999b96f3d7e0b\"" Mar 25 01:17:16.564112 containerd[1952]: time="2025-03-25T01:17:16.564026416Z" level=info msg="StartContainer for \"ce095c34222663726ffb025ae02637f2950b705f25a9052c994999b96f3d7e0b\"" Mar 25 01:17:16.571447 containerd[1952]: time="2025-03-25T01:17:16.571164967Z" level=info msg="connecting to shim ce095c34222663726ffb025ae02637f2950b705f25a9052c994999b96f3d7e0b" address="unix:///run/containerd/s/d103b80f119bbfb1ff0c1dcac5f541bfa3cd1c997b7ec0e35c5462624420f958" protocol=ttrpc version=3 Mar 25 01:17:16.599256 containerd[1952]: time="2025-03-25T01:17:16.598790461Z" level=info msg="StartContainer for \"82334fc1400e58f39dfedd586d6fcb7368f0eb7ed0c6c5f2ea268622436a6d20\" returns successfully" Mar 25 01:17:16.630553 systemd[1]: Started cri-containerd-ce095c34222663726ffb025ae02637f2950b705f25a9052c994999b96f3d7e0b.scope - libcontainer container ce095c34222663726ffb025ae02637f2950b705f25a9052c994999b96f3d7e0b. Mar 25 01:17:16.705239 containerd[1952]: time="2025-03-25T01:17:16.704949171Z" level=info msg="StartContainer for \"ce095c34222663726ffb025ae02637f2950b705f25a9052c994999b96f3d7e0b\" returns successfully" Mar 25 01:17:17.049584 kubelet[3231]: I0325 01:17:17.048549 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-tphhc" podStartSLOduration=29.04850138 podStartE2EDuration="29.04850138s" podCreationTimestamp="2025-03-25 01:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:17:17.04784638 +0000 UTC m=+35.547993886" watchObservedRunningTime="2025-03-25 01:17:17.04850138 +0000 UTC m=+35.548648790" Mar 25 01:17:17.075200 kubelet[3231]: I0325 01:17:17.074924 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-cz5xn" podStartSLOduration=29.074161707 podStartE2EDuration="29.074161707s" podCreationTimestamp="2025-03-25 01:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:17:17.07225812 +0000 UTC m=+35.572405542" watchObservedRunningTime="2025-03-25 01:17:17.074161707 +0000 UTC m=+35.574309117" Mar 25 01:17:19.112300 systemd[1]: Started sshd@7-172.31.23.121:22-147.75.109.163:57094.service - OpenSSH per-connection server daemon (147.75.109.163:57094). 
Mar 25 01:17:19.304458 sshd[4626]: Accepted publickey for core from 147.75.109.163 port 57094 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:17:19.307045 sshd-session[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:19.315481 systemd-logind[1936]: New session 8 of user core. Mar 25 01:17:19.324483 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 25 01:17:19.589267 sshd[4628]: Connection closed by 147.75.109.163 port 57094 Mar 25 01:17:19.590150 sshd-session[4626]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:19.597479 systemd[1]: sshd@7-172.31.23.121:22-147.75.109.163:57094.service: Deactivated successfully. Mar 25 01:17:19.602023 systemd[1]: session-8.scope: Deactivated successfully. Mar 25 01:17:19.604613 systemd-logind[1936]: Session 8 logged out. Waiting for processes to exit. Mar 25 01:17:19.606858 systemd-logind[1936]: Removed session 8. Mar 25 01:17:24.626627 systemd[1]: Started sshd@8-172.31.23.121:22-147.75.109.163:38286.service - OpenSSH per-connection server daemon (147.75.109.163:38286). Mar 25 01:17:24.823827 sshd[4641]: Accepted publickey for core from 147.75.109.163 port 38286 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:17:24.826430 sshd-session[4641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:24.835076 systemd-logind[1936]: New session 9 of user core. Mar 25 01:17:24.850490 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 25 01:17:25.097085 sshd[4643]: Connection closed by 147.75.109.163 port 38286 Mar 25 01:17:25.097587 sshd-session[4641]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:25.105310 systemd[1]: sshd@8-172.31.23.121:22-147.75.109.163:38286.service: Deactivated successfully. Mar 25 01:17:25.109122 systemd[1]: session-9.scope: Deactivated successfully. Mar 25 01:17:25.111048 systemd-logind[1936]: Session 9 logged out. Waiting for processes to exit. Mar 25 01:17:25.113838 systemd-logind[1936]: Removed session 9. Mar 25 01:17:30.134755 systemd[1]: Started sshd@9-172.31.23.121:22-147.75.109.163:43226.service - OpenSSH per-connection server daemon (147.75.109.163:43226). Mar 25 01:17:30.330289 sshd[4655]: Accepted publickey for core from 147.75.109.163 port 43226 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:17:30.332826 sshd-session[4655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:30.344193 systemd-logind[1936]: New session 10 of user core. Mar 25 01:17:30.351476 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 25 01:17:30.594772 sshd[4657]: Connection closed by 147.75.109.163 port 43226 Mar 25 01:17:30.595305 sshd-session[4655]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:30.602198 systemd[1]: sshd@9-172.31.23.121:22-147.75.109.163:43226.service: Deactivated successfully. Mar 25 01:17:30.607402 systemd[1]: session-10.scope: Deactivated successfully. Mar 25 01:17:30.610713 systemd-logind[1936]: Session 10 logged out. Waiting for processes to exit. Mar 25 01:17:30.612899 systemd-logind[1936]: Removed session 10. Mar 25 01:17:35.631718 systemd[1]: Started sshd@10-172.31.23.121:22-147.75.109.163:43238.service - OpenSSH per-connection server daemon (147.75.109.163:43238). 
Mar 25 01:17:35.825725 sshd[4670]: Accepted publickey for core from 147.75.109.163 port 43238 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:17:35.828243 sshd-session[4670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:35.837314 systemd-logind[1936]: New session 11 of user core. Mar 25 01:17:35.848480 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 25 01:17:36.097007 sshd[4672]: Connection closed by 147.75.109.163 port 43238 Mar 25 01:17:36.097871 sshd-session[4670]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:36.105690 systemd-logind[1936]: Session 11 logged out. Waiting for processes to exit. Mar 25 01:17:36.106522 systemd[1]: sshd@10-172.31.23.121:22-147.75.109.163:43238.service: Deactivated successfully. Mar 25 01:17:36.111674 systemd[1]: session-11.scope: Deactivated successfully. Mar 25 01:17:36.114196 systemd-logind[1936]: Removed session 11. Mar 25 01:17:41.135641 systemd[1]: Started sshd@11-172.31.23.121:22-147.75.109.163:60048.service - OpenSSH per-connection server daemon (147.75.109.163:60048). Mar 25 01:17:41.325096 sshd[4685]: Accepted publickey for core from 147.75.109.163 port 60048 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:17:41.327517 sshd-session[4685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:41.336092 systemd-logind[1936]: New session 12 of user core. Mar 25 01:17:41.348504 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 25 01:17:41.590345 sshd[4687]: Connection closed by 147.75.109.163 port 60048 Mar 25 01:17:41.591175 sshd-session[4685]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:41.596511 systemd[1]: sshd@11-172.31.23.121:22-147.75.109.163:60048.service: Deactivated successfully. Mar 25 01:17:41.601981 systemd[1]: session-12.scope: Deactivated successfully. Mar 25 01:17:41.605193 systemd-logind[1936]: Session 12 logged out. Waiting for processes to exit. Mar 25 01:17:41.608199 systemd-logind[1936]: Removed session 12. Mar 25 01:17:41.625736 systemd[1]: Started sshd@12-172.31.23.121:22-147.75.109.163:60060.service - OpenSSH per-connection server daemon (147.75.109.163:60060). Mar 25 01:17:41.827156 sshd[4700]: Accepted publickey for core from 147.75.109.163 port 60060 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:17:41.829734 sshd-session[4700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:41.837176 systemd-logind[1936]: New session 13 of user core. Mar 25 01:17:41.855464 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 25 01:17:42.163744 sshd[4704]: Connection closed by 147.75.109.163 port 60060 Mar 25 01:17:42.163595 sshd-session[4700]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:42.175747 systemd-logind[1936]: Session 13 logged out. Waiting for processes to exit. Mar 25 01:17:42.177702 systemd[1]: sshd@12-172.31.23.121:22-147.75.109.163:60060.service: Deactivated successfully. Mar 25 01:17:42.184236 systemd[1]: session-13.scope: Deactivated successfully. Mar 25 01:17:42.210674 systemd[1]: Started sshd@13-172.31.23.121:22-147.75.109.163:60062.service - OpenSSH per-connection server daemon (147.75.109.163:60062). Mar 25 01:17:42.214395 systemd-logind[1936]: Removed session 13. 
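Aside: most of the remainder of this journal is the same SSH pattern repeated: sshd accepts a publickey login for core, systemd-logind opens session N, and shortly afterwards the session is closed and removed. If one wanted to measure those session lengths from a saved copy of the log, pairing the "New session" and "Removed session" lines is enough; a rough sketch, where the journal.log filename is an assumption and the year is taken from the containerd entries since syslog-style stamps omit it.

import re
from datetime import datetime

TS   = r"(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+)"
NEW  = re.compile(TS + r" systemd-logind\[\d+\]: New session (?P<id>\d+) of user")
GONE = re.compile(TS + r" systemd-logind\[\d+\]: Removed session (?P<id>\d+)\.")

def when(stamp: str) -> datetime:
    # Year 2025 assumed from the containerd timestamps elsewhere in this log
    return datetime.strptime("2025 " + stamp, "%Y %b %d %H:%M:%S.%f")

opened = {}
with open("journal.log") as fh:                      # hypothetical dump of the journal excerpt
    for line in fh:
        for m in NEW.finditer(line):
            opened[m.group("id")] = when(m.group("ts"))
        for m in GONE.finditer(line):
            start = opened.pop(m.group("id"), None)
            if start is not None:
                print(f"session {m.group('id')}: {(when(m.group('ts')) - start).total_seconds():.3f}s")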
Mar 25 01:17:42.407709 sshd[4713]: Accepted publickey for core from 147.75.109.163 port 60062 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:17:42.414439 sshd-session[4713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:42.424976 systemd-logind[1936]: New session 14 of user core. Mar 25 01:17:42.432451 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 25 01:17:42.672005 sshd[4717]: Connection closed by 147.75.109.163 port 60062 Mar 25 01:17:42.671689 sshd-session[4713]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:42.679262 systemd[1]: sshd@13-172.31.23.121:22-147.75.109.163:60062.service: Deactivated successfully. Mar 25 01:17:42.684945 systemd[1]: session-14.scope: Deactivated successfully. Mar 25 01:17:42.687453 systemd-logind[1936]: Session 14 logged out. Waiting for processes to exit. Mar 25 01:17:42.689438 systemd-logind[1936]: Removed session 14. Mar 25 01:17:47.713332 systemd[1]: Started sshd@14-172.31.23.121:22-147.75.109.163:60068.service - OpenSSH per-connection server daemon (147.75.109.163:60068). Mar 25 01:17:47.909120 sshd[4730]: Accepted publickey for core from 147.75.109.163 port 60068 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:17:47.910240 sshd-session[4730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:47.920868 systemd-logind[1936]: New session 15 of user core. Mar 25 01:17:47.933512 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 25 01:17:48.183036 sshd[4732]: Connection closed by 147.75.109.163 port 60068 Mar 25 01:17:48.183941 sshd-session[4730]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:48.189400 systemd[1]: sshd@14-172.31.23.121:22-147.75.109.163:60068.service: Deactivated successfully. Mar 25 01:17:48.194110 systemd[1]: session-15.scope: Deactivated successfully. Mar 25 01:17:48.198732 systemd-logind[1936]: Session 15 logged out. Waiting for processes to exit. Mar 25 01:17:48.200710 systemd-logind[1936]: Removed session 15. Mar 25 01:17:53.220165 systemd[1]: Started sshd@15-172.31.23.121:22-147.75.109.163:59268.service - OpenSSH per-connection server daemon (147.75.109.163:59268). Mar 25 01:17:53.414324 sshd[4746]: Accepted publickey for core from 147.75.109.163 port 59268 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:17:53.416799 sshd-session[4746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:53.425983 systemd-logind[1936]: New session 16 of user core. Mar 25 01:17:53.430443 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 25 01:17:53.670528 sshd[4748]: Connection closed by 147.75.109.163 port 59268 Mar 25 01:17:53.671617 sshd-session[4746]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:53.677586 systemd[1]: sshd@15-172.31.23.121:22-147.75.109.163:59268.service: Deactivated successfully. Mar 25 01:17:53.681155 systemd[1]: session-16.scope: Deactivated successfully. Mar 25 01:17:53.684908 systemd-logind[1936]: Session 16 logged out. Waiting for processes to exit. Mar 25 01:17:53.686903 systemd-logind[1936]: Removed session 16. Mar 25 01:17:58.707470 systemd[1]: Started sshd@16-172.31.23.121:22-147.75.109.163:59284.service - OpenSSH per-connection server daemon (147.75.109.163:59284). 
Mar 25 01:17:58.901556 sshd[4761]: Accepted publickey for core from 147.75.109.163 port 59284 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:17:58.904070 sshd-session[4761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:58.912573 systemd-logind[1936]: New session 17 of user core. Mar 25 01:17:58.921775 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 25 01:17:59.166853 sshd[4763]: Connection closed by 147.75.109.163 port 59284 Mar 25 01:17:59.167481 sshd-session[4761]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:59.174747 systemd[1]: sshd@16-172.31.23.121:22-147.75.109.163:59284.service: Deactivated successfully. Mar 25 01:17:59.178963 systemd[1]: session-17.scope: Deactivated successfully. Mar 25 01:17:59.181966 systemd-logind[1936]: Session 17 logged out. Waiting for processes to exit. Mar 25 01:17:59.184644 systemd-logind[1936]: Removed session 17. Mar 25 01:17:59.204652 systemd[1]: Started sshd@17-172.31.23.121:22-147.75.109.163:59288.service - OpenSSH per-connection server daemon (147.75.109.163:59288). Mar 25 01:17:59.403637 sshd[4775]: Accepted publickey for core from 147.75.109.163 port 59288 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:17:59.406126 sshd-session[4775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:59.413750 systemd-logind[1936]: New session 18 of user core. Mar 25 01:17:59.420441 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 25 01:17:59.720264 sshd[4777]: Connection closed by 147.75.109.163 port 59288 Mar 25 01:17:59.721101 sshd-session[4775]: pam_unix(sshd:session): session closed for user core Mar 25 01:17:59.726813 systemd[1]: sshd@17-172.31.23.121:22-147.75.109.163:59288.service: Deactivated successfully. Mar 25 01:17:59.730048 systemd[1]: session-18.scope: Deactivated successfully. Mar 25 01:17:59.733083 systemd-logind[1936]: Session 18 logged out. Waiting for processes to exit. Mar 25 01:17:59.735103 systemd-logind[1936]: Removed session 18. Mar 25 01:17:59.754467 systemd[1]: Started sshd@18-172.31.23.121:22-147.75.109.163:59294.service - OpenSSH per-connection server daemon (147.75.109.163:59294). Mar 25 01:17:59.952118 sshd[4787]: Accepted publickey for core from 147.75.109.163 port 59294 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:17:59.953771 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:17:59.962815 systemd-logind[1936]: New session 19 of user core. Mar 25 01:17:59.967477 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 25 01:18:02.518494 sshd[4789]: Connection closed by 147.75.109.163 port 59294 Mar 25 01:18:02.520700 sshd-session[4787]: pam_unix(sshd:session): session closed for user core Mar 25 01:18:02.528755 systemd[1]: session-19.scope: Deactivated successfully. Mar 25 01:18:02.532030 systemd[1]: sshd@18-172.31.23.121:22-147.75.109.163:59294.service: Deactivated successfully. Mar 25 01:18:02.542180 systemd-logind[1936]: Session 19 logged out. Waiting for processes to exit. Mar 25 01:18:02.563933 systemd[1]: Started sshd@19-172.31.23.121:22-147.75.109.163:50728.service - OpenSSH per-connection server daemon (147.75.109.163:50728). Mar 25 01:18:02.569257 systemd-logind[1936]: Removed session 19. 
Mar 25 01:18:02.771463 sshd[4803]: Accepted publickey for core from 147.75.109.163 port 50728 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:18:02.774345 sshd-session[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:18:02.783635 systemd-logind[1936]: New session 20 of user core. Mar 25 01:18:02.791475 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 25 01:18:03.279261 sshd[4808]: Connection closed by 147.75.109.163 port 50728 Mar 25 01:18:03.278822 sshd-session[4803]: pam_unix(sshd:session): session closed for user core Mar 25 01:18:03.288342 systemd[1]: sshd@19-172.31.23.121:22-147.75.109.163:50728.service: Deactivated successfully. Mar 25 01:18:03.293829 systemd[1]: session-20.scope: Deactivated successfully. Mar 25 01:18:03.301617 systemd-logind[1936]: Session 20 logged out. Waiting for processes to exit. Mar 25 01:18:03.324672 systemd[1]: Started sshd@20-172.31.23.121:22-147.75.109.163:50738.service - OpenSSH per-connection server daemon (147.75.109.163:50738). Mar 25 01:18:03.329369 systemd-logind[1936]: Removed session 20. Mar 25 01:18:03.533465 sshd[4817]: Accepted publickey for core from 147.75.109.163 port 50738 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:18:03.535959 sshd-session[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:18:03.545188 systemd-logind[1936]: New session 21 of user core. Mar 25 01:18:03.553484 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 25 01:18:03.796694 sshd[4821]: Connection closed by 147.75.109.163 port 50738 Mar 25 01:18:03.797255 sshd-session[4817]: pam_unix(sshd:session): session closed for user core Mar 25 01:18:03.802140 systemd-logind[1936]: Session 21 logged out. Waiting for processes to exit. Mar 25 01:18:03.802779 systemd[1]: sshd@20-172.31.23.121:22-147.75.109.163:50738.service: Deactivated successfully. Mar 25 01:18:03.807132 systemd[1]: session-21.scope: Deactivated successfully. Mar 25 01:18:03.812005 systemd-logind[1936]: Removed session 21. Mar 25 01:18:08.831013 systemd[1]: Started sshd@21-172.31.23.121:22-147.75.109.163:50748.service - OpenSSH per-connection server daemon (147.75.109.163:50748). Mar 25 01:18:09.029530 sshd[4834]: Accepted publickey for core from 147.75.109.163 port 50748 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:18:09.033450 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:18:09.042558 systemd-logind[1936]: New session 22 of user core. Mar 25 01:18:09.047478 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 25 01:18:09.286309 sshd[4836]: Connection closed by 147.75.109.163 port 50748 Mar 25 01:18:09.286515 sshd-session[4834]: pam_unix(sshd:session): session closed for user core Mar 25 01:18:09.294294 systemd[1]: sshd@21-172.31.23.121:22-147.75.109.163:50748.service: Deactivated successfully. Mar 25 01:18:09.303017 systemd[1]: session-22.scope: Deactivated successfully. Mar 25 01:18:09.307527 systemd-logind[1936]: Session 22 logged out. Waiting for processes to exit. Mar 25 01:18:09.309718 systemd-logind[1936]: Removed session 22. Mar 25 01:18:14.323139 systemd[1]: Started sshd@22-172.31.23.121:22-147.75.109.163:58658.service - OpenSSH per-connection server daemon (147.75.109.163:58658). 
Mar 25 01:18:14.523343 sshd[4850]: Accepted publickey for core from 147.75.109.163 port 58658 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:18:14.525841 sshd-session[4850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:18:14.534538 systemd-logind[1936]: New session 23 of user core. Mar 25 01:18:14.542484 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 25 01:18:14.792742 sshd[4852]: Connection closed by 147.75.109.163 port 58658 Mar 25 01:18:14.793636 sshd-session[4850]: pam_unix(sshd:session): session closed for user core Mar 25 01:18:14.801663 systemd[1]: sshd@22-172.31.23.121:22-147.75.109.163:58658.service: Deactivated successfully. Mar 25 01:18:14.811597 systemd[1]: session-23.scope: Deactivated successfully. Mar 25 01:18:14.816171 systemd-logind[1936]: Session 23 logged out. Waiting for processes to exit. Mar 25 01:18:14.818853 systemd-logind[1936]: Removed session 23. Mar 25 01:18:19.831118 systemd[1]: Started sshd@23-172.31.23.121:22-147.75.109.163:58662.service - OpenSSH per-connection server daemon (147.75.109.163:58662). Mar 25 01:18:20.031585 sshd[4867]: Accepted publickey for core from 147.75.109.163 port 58662 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:18:20.034030 sshd-session[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:18:20.042127 systemd-logind[1936]: New session 24 of user core. Mar 25 01:18:20.053460 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 25 01:18:20.304504 sshd[4869]: Connection closed by 147.75.109.163 port 58662 Mar 25 01:18:20.305404 sshd-session[4867]: pam_unix(sshd:session): session closed for user core Mar 25 01:18:20.312614 systemd[1]: sshd@23-172.31.23.121:22-147.75.109.163:58662.service: Deactivated successfully. Mar 25 01:18:20.316468 systemd[1]: session-24.scope: Deactivated successfully. Mar 25 01:18:20.318745 systemd-logind[1936]: Session 24 logged out. Waiting for processes to exit. Mar 25 01:18:20.320970 systemd-logind[1936]: Removed session 24. Mar 25 01:18:25.341516 systemd[1]: Started sshd@24-172.31.23.121:22-147.75.109.163:57012.service - OpenSSH per-connection server daemon (147.75.109.163:57012). Mar 25 01:18:25.534349 sshd[4881]: Accepted publickey for core from 147.75.109.163 port 57012 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:18:25.537658 sshd-session[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:18:25.548947 systemd-logind[1936]: New session 25 of user core. Mar 25 01:18:25.555493 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 25 01:18:25.795995 sshd[4883]: Connection closed by 147.75.109.163 port 57012 Mar 25 01:18:25.796878 sshd-session[4881]: pam_unix(sshd:session): session closed for user core Mar 25 01:18:25.803832 systemd[1]: sshd@24-172.31.23.121:22-147.75.109.163:57012.service: Deactivated successfully. Mar 25 01:18:25.808782 systemd[1]: session-25.scope: Deactivated successfully. Mar 25 01:18:25.811124 systemd-logind[1936]: Session 25 logged out. Waiting for processes to exit. Mar 25 01:18:25.813604 systemd-logind[1936]: Removed session 25. Mar 25 01:18:25.835023 systemd[1]: Started sshd@25-172.31.23.121:22-147.75.109.163:57018.service - OpenSSH per-connection server daemon (147.75.109.163:57018). 
Mar 25 01:18:26.031268 sshd[4895]: Accepted publickey for core from 147.75.109.163 port 57018 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:18:26.033690 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:18:26.041650 systemd-logind[1936]: New session 26 of user core. Mar 25 01:18:26.050477 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 25 01:18:29.803779 containerd[1952]: time="2025-03-25T01:18:29.803624860Z" level=info msg="StopContainer for \"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78\" with timeout 30 (s)" Mar 25 01:18:29.805796 containerd[1952]: time="2025-03-25T01:18:29.805367404Z" level=info msg="Stop container \"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78\" with signal terminated" Mar 25 01:18:29.830474 systemd[1]: cri-containerd-8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78.scope: Deactivated successfully. Mar 25 01:18:29.839622 containerd[1952]: time="2025-03-25T01:18:29.839495008Z" level=info msg="received exit event container_id:\"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78\" id:\"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78\" pid:3884 exited_at:{seconds:1742865509 nanos:838186648}" Mar 25 01:18:29.842188 containerd[1952]: time="2025-03-25T01:18:29.842029696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78\" id:\"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78\" pid:3884 exited_at:{seconds:1742865509 nanos:838186648}" Mar 25 01:18:29.850629 containerd[1952]: time="2025-03-25T01:18:29.850555756Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 01:18:29.863942 containerd[1952]: time="2025-03-25T01:18:29.863868112Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\" id:\"d1d344d9c17adf6039b4500750b4ce7c7d1286226b22e9f203354e0cc343251a\" pid:4924 exited_at:{seconds:1742865509 nanos:863033452}" Mar 25 01:18:29.872031 containerd[1952]: time="2025-03-25T01:18:29.871792264Z" level=info msg="StopContainer for \"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\" with timeout 2 (s)" Mar 25 01:18:29.872924 containerd[1952]: time="2025-03-25T01:18:29.872882344Z" level=info msg="Stop container \"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\" with signal terminated" Mar 25 01:18:29.895050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78-rootfs.mount: Deactivated successfully. Mar 25 01:18:29.897959 systemd-networkd[1856]: lxc_health: Link DOWN Mar 25 01:18:29.897974 systemd-networkd[1856]: lxc_health: Lost carrier Mar 25 01:18:29.924360 systemd[1]: cri-containerd-6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22.scope: Deactivated successfully. Mar 25 01:18:29.924924 systemd[1]: cri-containerd-6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22.scope: Consumed 14.097s CPU time, 123.4M memory peak, 152K read from disk, 12.9M written to disk. 
Mar 25 01:18:29.942260 containerd[1952]: time="2025-03-25T01:18:29.942187360Z" level=info msg="received exit event container_id:\"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\" id:\"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\" pid:3957 exited_at:{seconds:1742865509 nanos:940378696}" Mar 25 01:18:29.942682 containerd[1952]: time="2025-03-25T01:18:29.942571912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\" id:\"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\" pid:3957 exited_at:{seconds:1742865509 nanos:940378696}" Mar 25 01:18:29.946255 containerd[1952]: time="2025-03-25T01:18:29.945809176Z" level=info msg="StopContainer for \"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78\" returns successfully" Mar 25 01:18:29.950995 containerd[1952]: time="2025-03-25T01:18:29.950929637Z" level=info msg="StopPodSandbox for \"04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3\"" Mar 25 01:18:29.951326 containerd[1952]: time="2025-03-25T01:18:29.951271793Z" level=info msg="Container to stop \"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:18:29.977490 systemd[1]: cri-containerd-04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3.scope: Deactivated successfully. Mar 25 01:18:29.989380 containerd[1952]: time="2025-03-25T01:18:29.989059073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3\" id:\"04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3\" pid:3552 exit_status:137 exited_at:{seconds:1742865509 nanos:988543229}" Mar 25 01:18:30.012736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22-rootfs.mount: Deactivated successfully. 
Mar 25 01:18:30.035233 containerd[1952]: time="2025-03-25T01:18:30.035092141Z" level=info msg="StopContainer for \"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\" returns successfully" Mar 25 01:18:30.036550 containerd[1952]: time="2025-03-25T01:18:30.036469897Z" level=info msg="StopPodSandbox for \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\"" Mar 25 01:18:30.036870 containerd[1952]: time="2025-03-25T01:18:30.036746257Z" level=info msg="Container to stop \"812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:18:30.036870 containerd[1952]: time="2025-03-25T01:18:30.036807073Z" level=info msg="Container to stop \"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:18:30.036870 containerd[1952]: time="2025-03-25T01:18:30.036831445Z" level=info msg="Container to stop \"fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:18:30.037131 containerd[1952]: time="2025-03-25T01:18:30.037050025Z" level=info msg="Container to stop \"bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:18:30.037131 containerd[1952]: time="2025-03-25T01:18:30.037081069Z" level=info msg="Container to stop \"7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:18:30.058404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3-rootfs.mount: Deactivated successfully. Mar 25 01:18:30.061901 systemd[1]: cri-containerd-6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946.scope: Deactivated successfully. 
Mar 25 01:18:30.066866 containerd[1952]: time="2025-03-25T01:18:30.065055085Z" level=info msg="shim disconnected" id=04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3 namespace=k8s.io Mar 25 01:18:30.066866 containerd[1952]: time="2025-03-25T01:18:30.065124997Z" level=warning msg="cleaning up after shim disconnected" id=04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3 namespace=k8s.io Mar 25 01:18:30.066866 containerd[1952]: time="2025-03-25T01:18:30.065172937Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 25 01:18:30.092350 containerd[1952]: time="2025-03-25T01:18:30.092275153Z" level=warning msg="cleanup warnings time=\"2025-03-25T01:18:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 25 01:18:30.096968 containerd[1952]: time="2025-03-25T01:18:30.096765205Z" level=info msg="received exit event sandbox_id:\"04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3\" exit_status:137 exited_at:{seconds:1742865509 nanos:988543229}" Mar 25 01:18:30.103482 containerd[1952]: time="2025-03-25T01:18:30.102870733Z" level=info msg="TearDown network for sandbox \"04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3\" successfully" Mar 25 01:18:30.103637 containerd[1952]: time="2025-03-25T01:18:30.103480333Z" level=info msg="StopPodSandbox for \"04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3\" returns successfully" Mar 25 01:18:30.103637 containerd[1952]: time="2025-03-25T01:18:30.103422361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" id:\"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" pid:3469 exit_status:137 exited_at:{seconds:1742865510 nanos:60944845}" Mar 25 01:18:30.104148 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3-shm.mount: Deactivated successfully. Mar 25 01:18:30.120633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946-rootfs.mount: Deactivated successfully. 
Mar 25 01:18:30.126682 containerd[1952]: time="2025-03-25T01:18:30.126369649Z" level=info msg="received exit event sandbox_id:\"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" exit_status:137 exited_at:{seconds:1742865510 nanos:60944845}" Mar 25 01:18:30.129754 containerd[1952]: time="2025-03-25T01:18:30.129536317Z" level=info msg="shim disconnected" id=6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946 namespace=k8s.io Mar 25 01:18:30.129754 containerd[1952]: time="2025-03-25T01:18:30.129649717Z" level=warning msg="cleaning up after shim disconnected" id=6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946 namespace=k8s.io Mar 25 01:18:30.129754 containerd[1952]: time="2025-03-25T01:18:30.129731413Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 25 01:18:30.133505 containerd[1952]: time="2025-03-25T01:18:30.132915817Z" level=info msg="TearDown network for sandbox \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" successfully" Mar 25 01:18:30.133505 containerd[1952]: time="2025-03-25T01:18:30.132967105Z" level=info msg="StopPodSandbox for \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" returns successfully" Mar 25 01:18:30.219676 kubelet[3231]: I0325 01:18:30.219485 3231 scope.go:117] "RemoveContainer" containerID="6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22" Mar 25 01:18:30.222873 kubelet[3231]: I0325 01:18:30.221494 3231 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0969260e-64dc-4bd0-bf5c-fdf82c8250da-clustermesh-secrets\") pod \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " Mar 25 01:18:30.223718 kubelet[3231]: I0325 01:18:30.223648 3231 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0969260e-64dc-4bd0-bf5c-fdf82c8250da-hubble-tls\") pod \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " Mar 25 01:18:30.224271 kubelet[3231]: I0325 01:18:30.223975 3231 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33fcaa15-597f-4e8a-ba47-2c25cdf67270-cilium-config-path\") pod \"33fcaa15-597f-4e8a-ba47-2c25cdf67270\" (UID: \"33fcaa15-597f-4e8a-ba47-2c25cdf67270\") " Mar 25 01:18:30.226899 kubelet[3231]: I0325 01:18:30.225507 3231 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-lib-modules\") pod \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " Mar 25 01:18:30.226899 kubelet[3231]: I0325 01:18:30.225576 3231 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk282\" (UniqueName: \"kubernetes.io/projected/0969260e-64dc-4bd0-bf5c-fdf82c8250da-kube-api-access-gk282\") pod \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " Mar 25 01:18:30.226899 kubelet[3231]: I0325 01:18:30.225623 3231 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0969260e-64dc-4bd0-bf5c-fdf82c8250da-cilium-config-path\") pod \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " Mar 25 01:18:30.226899 
kubelet[3231]: I0325 01:18:30.225659 3231 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-xtables-lock\") pod \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " Mar 25 01:18:30.226899 kubelet[3231]: I0325 01:18:30.225693 3231 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-etc-cni-netd\") pod \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " Mar 25 01:18:30.226899 kubelet[3231]: I0325 01:18:30.225728 3231 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-cilium-cgroup\") pod \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " Mar 25 01:18:30.227360 kubelet[3231]: I0325 01:18:30.225760 3231 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-host-proc-sys-kernel\") pod \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " Mar 25 01:18:30.227360 kubelet[3231]: I0325 01:18:30.225792 3231 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-cilium-run\") pod \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " Mar 25 01:18:30.227360 kubelet[3231]: I0325 01:18:30.225828 3231 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d97d\" (UniqueName: \"kubernetes.io/projected/33fcaa15-597f-4e8a-ba47-2c25cdf67270-kube-api-access-7d97d\") pod \"33fcaa15-597f-4e8a-ba47-2c25cdf67270\" (UID: \"33fcaa15-597f-4e8a-ba47-2c25cdf67270\") " Mar 25 01:18:30.227360 kubelet[3231]: I0325 01:18:30.225863 3231 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-cni-path\") pod \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " Mar 25 01:18:30.227360 kubelet[3231]: I0325 01:18:30.225896 3231 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-hostproc\") pod \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " Mar 25 01:18:30.227360 kubelet[3231]: I0325 01:18:30.225933 3231 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-bpf-maps\") pod \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " Mar 25 01:18:30.227663 kubelet[3231]: I0325 01:18:30.225965 3231 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-host-proc-sys-net\") pod \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\" (UID: \"0969260e-64dc-4bd0-bf5c-fdf82c8250da\") " Mar 25 01:18:30.227663 kubelet[3231]: I0325 01:18:30.226055 3231 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0969260e-64dc-4bd0-bf5c-fdf82c8250da" (UID: "0969260e-64dc-4bd0-bf5c-fdf82c8250da"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:18:30.227663 kubelet[3231]: I0325 01:18:30.226116 3231 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0969260e-64dc-4bd0-bf5c-fdf82c8250da" (UID: "0969260e-64dc-4bd0-bf5c-fdf82c8250da"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:18:30.233962 kubelet[3231]: I0325 01:18:30.233735 3231 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0969260e-64dc-4bd0-bf5c-fdf82c8250da" (UID: "0969260e-64dc-4bd0-bf5c-fdf82c8250da"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:18:30.234332 kubelet[3231]: I0325 01:18:30.233841 3231 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0969260e-64dc-4bd0-bf5c-fdf82c8250da" (UID: "0969260e-64dc-4bd0-bf5c-fdf82c8250da"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:18:30.234521 kubelet[3231]: I0325 01:18:30.234444 3231 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0969260e-64dc-4bd0-bf5c-fdf82c8250da-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0969260e-64dc-4bd0-bf5c-fdf82c8250da" (UID: "0969260e-64dc-4bd0-bf5c-fdf82c8250da"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 25 01:18:30.234794 kubelet[3231]: I0325 01:18:30.234193 3231 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0969260e-64dc-4bd0-bf5c-fdf82c8250da" (UID: "0969260e-64dc-4bd0-bf5c-fdf82c8250da"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:18:30.235063 kubelet[3231]: I0325 01:18:30.234946 3231 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0969260e-64dc-4bd0-bf5c-fdf82c8250da" (UID: "0969260e-64dc-4bd0-bf5c-fdf82c8250da"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:18:30.235276 kubelet[3231]: I0325 01:18:30.235033 3231 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0969260e-64dc-4bd0-bf5c-fdf82c8250da" (UID: "0969260e-64dc-4bd0-bf5c-fdf82c8250da"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:18:30.236891 kubelet[3231]: I0325 01:18:30.236828 3231 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-cni-path" (OuterVolumeSpecName: "cni-path") pod "0969260e-64dc-4bd0-bf5c-fdf82c8250da" (UID: "0969260e-64dc-4bd0-bf5c-fdf82c8250da"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:18:30.237064 kubelet[3231]: I0325 01:18:30.236908 3231 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-hostproc" (OuterVolumeSpecName: "hostproc") pod "0969260e-64dc-4bd0-bf5c-fdf82c8250da" (UID: "0969260e-64dc-4bd0-bf5c-fdf82c8250da"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:18:30.237064 kubelet[3231]: I0325 01:18:30.236953 3231 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0969260e-64dc-4bd0-bf5c-fdf82c8250da" (UID: "0969260e-64dc-4bd0-bf5c-fdf82c8250da"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:18:30.240858 containerd[1952]: time="2025-03-25T01:18:30.239964986Z" level=info msg="RemoveContainer for \"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\"" Mar 25 01:18:30.241262 kubelet[3231]: I0325 01:18:30.241171 3231 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33fcaa15-597f-4e8a-ba47-2c25cdf67270-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "33fcaa15-597f-4e8a-ba47-2c25cdf67270" (UID: "33fcaa15-597f-4e8a-ba47-2c25cdf67270"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 25 01:18:30.242222 kubelet[3231]: I0325 01:18:30.242116 3231 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0969260e-64dc-4bd0-bf5c-fdf82c8250da-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0969260e-64dc-4bd0-bf5c-fdf82c8250da" (UID: "0969260e-64dc-4bd0-bf5c-fdf82c8250da"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 01:18:30.243658 kubelet[3231]: I0325 01:18:30.242306 3231 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0969260e-64dc-4bd0-bf5c-fdf82c8250da-kube-api-access-gk282" (OuterVolumeSpecName: "kube-api-access-gk282") pod "0969260e-64dc-4bd0-bf5c-fdf82c8250da" (UID: "0969260e-64dc-4bd0-bf5c-fdf82c8250da"). InnerVolumeSpecName "kube-api-access-gk282". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 01:18:30.248348 kubelet[3231]: I0325 01:18:30.247571 3231 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33fcaa15-597f-4e8a-ba47-2c25cdf67270-kube-api-access-7d97d" (OuterVolumeSpecName: "kube-api-access-7d97d") pod "33fcaa15-597f-4e8a-ba47-2c25cdf67270" (UID: "33fcaa15-597f-4e8a-ba47-2c25cdf67270"). InnerVolumeSpecName "kube-api-access-7d97d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 01:18:30.253325 kubelet[3231]: I0325 01:18:30.253144 3231 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0969260e-64dc-4bd0-bf5c-fdf82c8250da-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0969260e-64dc-4bd0-bf5c-fdf82c8250da" (UID: "0969260e-64dc-4bd0-bf5c-fdf82c8250da"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 25 01:18:30.258668 containerd[1952]: time="2025-03-25T01:18:30.258431570Z" level=info msg="RemoveContainer for \"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\" returns successfully" Mar 25 01:18:30.259429 kubelet[3231]: I0325 01:18:30.259177 3231 scope.go:117] "RemoveContainer" containerID="812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8" Mar 25 01:18:30.265021 containerd[1952]: time="2025-03-25T01:18:30.264806450Z" level=info msg="RemoveContainer for \"812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8\"" Mar 25 01:18:30.274317 containerd[1952]: time="2025-03-25T01:18:30.274261190Z" level=info msg="RemoveContainer for \"812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8\" returns successfully" Mar 25 01:18:30.275613 kubelet[3231]: I0325 01:18:30.274873 3231 scope.go:117] "RemoveContainer" containerID="7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d" Mar 25 01:18:30.280522 containerd[1952]: time="2025-03-25T01:18:30.280475594Z" level=info msg="RemoveContainer for \"7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d\"" Mar 25 01:18:30.290185 containerd[1952]: time="2025-03-25T01:18:30.290063534Z" level=info msg="RemoveContainer for \"7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d\" returns successfully" Mar 25 01:18:30.290574 kubelet[3231]: I0325 01:18:30.290499 3231 scope.go:117] "RemoveContainer" containerID="bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3" Mar 25 01:18:30.293826 containerd[1952]: time="2025-03-25T01:18:30.293120894Z" level=info msg="RemoveContainer for \"bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3\"" Mar 25 01:18:30.301665 containerd[1952]: time="2025-03-25T01:18:30.301591538Z" level=info msg="RemoveContainer for \"bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3\" returns successfully" Mar 25 01:18:30.302118 kubelet[3231]: I0325 01:18:30.301927 3231 scope.go:117] "RemoveContainer" containerID="fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638" Mar 25 01:18:30.304753 containerd[1952]: time="2025-03-25T01:18:30.304592102Z" level=info msg="RemoveContainer for \"fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638\"" Mar 25 01:18:30.316359 containerd[1952]: time="2025-03-25T01:18:30.314541470Z" level=info msg="RemoveContainer for \"fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638\" returns successfully" Mar 25 01:18:30.316359 containerd[1952]: time="2025-03-25T01:18:30.315460814Z" level=error msg="ContainerStatus for \"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\": not found" Mar 25 01:18:30.316565 kubelet[3231]: I0325 01:18:30.314920 3231 scope.go:117] "RemoveContainer" containerID="6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22" Mar 25 01:18:30.316565 kubelet[3231]: 
E0325 01:18:30.315851 3231 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\": not found" containerID="6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22" Mar 25 01:18:30.316565 kubelet[3231]: I0325 01:18:30.315931 3231 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22"} err="failed to get container status \"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\": rpc error: code = NotFound desc = an error occurred when try to find container \"6390845a1c19b7f50fdec224b0f97d27ba4786baac7ee89aff690277a90c6f22\": not found" Mar 25 01:18:30.316565 kubelet[3231]: I0325 01:18:30.316116 3231 scope.go:117] "RemoveContainer" containerID="812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8" Mar 25 01:18:30.316775 containerd[1952]: time="2025-03-25T01:18:30.316501442Z" level=error msg="ContainerStatus for \"812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8\": not found" Mar 25 01:18:30.316842 kubelet[3231]: E0325 01:18:30.316724 3231 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8\": not found" containerID="812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8" Mar 25 01:18:30.316842 kubelet[3231]: I0325 01:18:30.316785 3231 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8"} err="failed to get container status \"812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8\": rpc error: code = NotFound desc = an error occurred when try to find container \"812f9149f810ab1d866d5988a5a1b5a36b2481d81c229a616b28d73714371dd8\": not found" Mar 25 01:18:30.316842 kubelet[3231]: I0325 01:18:30.316821 3231 scope.go:117] "RemoveContainer" containerID="7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d" Mar 25 01:18:30.318132 containerd[1952]: time="2025-03-25T01:18:30.317314322Z" level=error msg="ContainerStatus for \"7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d\": not found" Mar 25 01:18:30.318416 kubelet[3231]: E0325 01:18:30.317860 3231 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d\": not found" containerID="7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d" Mar 25 01:18:30.318416 kubelet[3231]: I0325 01:18:30.317908 3231 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d"} err="failed to get container status \"7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"7238cba5797a51305268e93b4ac2451be285f8d148307c5bb2ee2ce0917a2f4d\": not found" Mar 25 01:18:30.318416 kubelet[3231]: I0325 01:18:30.317950 3231 scope.go:117] "RemoveContainer" containerID="bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3" Mar 25 01:18:30.319054 containerd[1952]: time="2025-03-25T01:18:30.319001858Z" level=error msg="ContainerStatus for \"bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3\": not found" Mar 25 01:18:30.319760 kubelet[3231]: E0325 01:18:30.319568 3231 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3\": not found" containerID="bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3" Mar 25 01:18:30.319760 kubelet[3231]: I0325 01:18:30.319617 3231 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3"} err="failed to get container status \"bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb7c7f7624aee87dbf4f2efe5ecf8fc97eb916f6d082116ea3f83b05738909c3\": not found" Mar 25 01:18:30.319760 kubelet[3231]: I0325 01:18:30.319650 3231 scope.go:117] "RemoveContainer" containerID="fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638" Mar 25 01:18:30.320305 containerd[1952]: time="2025-03-25T01:18:30.320167634Z" level=error msg="ContainerStatus for \"fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638\": not found" Mar 25 01:18:30.320564 kubelet[3231]: E0325 01:18:30.320518 3231 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638\": not found" containerID="fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638" Mar 25 01:18:30.320645 kubelet[3231]: I0325 01:18:30.320569 3231 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638"} err="failed to get container status \"fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638\": rpc error: code = NotFound desc = an error occurred when try to find container \"fdff90c58d3476db8c6e1e5aefe7609224803961a1de23e0fa383d45be1e5638\": not found" Mar 25 01:18:30.320645 kubelet[3231]: I0325 01:18:30.320613 3231 scope.go:117] "RemoveContainer" containerID="8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78" Mar 25 01:18:30.323886 containerd[1952]: time="2025-03-25T01:18:30.323744114Z" level=info msg="RemoveContainer for \"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78\"" Mar 25 01:18:30.326784 kubelet[3231]: I0325 01:18:30.326416 3231 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-bpf-maps\") on node \"ip-172-31-23-121\" DevicePath \"\"" Mar 25 01:18:30.326784 kubelet[3231]: I0325 
01:18:30.326460 3231 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-host-proc-sys-net\") on node \"ip-172-31-23-121\" DevicePath \"\"" Mar 25 01:18:30.326784 kubelet[3231]: I0325 01:18:30.326481 3231 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-hostproc\") on node \"ip-172-31-23-121\" DevicePath \"\"" Mar 25 01:18:30.326784 kubelet[3231]: I0325 01:18:30.326502 3231 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0969260e-64dc-4bd0-bf5c-fdf82c8250da-hubble-tls\") on node \"ip-172-31-23-121\" DevicePath \"\"" Mar 25 01:18:30.326784 kubelet[3231]: I0325 01:18:30.326523 3231 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0969260e-64dc-4bd0-bf5c-fdf82c8250da-clustermesh-secrets\") on node \"ip-172-31-23-121\" DevicePath \"\"" Mar 25 01:18:30.326784 kubelet[3231]: I0325 01:18:30.326543 3231 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33fcaa15-597f-4e8a-ba47-2c25cdf67270-cilium-config-path\") on node \"ip-172-31-23-121\" DevicePath \"\"" Mar 25 01:18:30.326784 kubelet[3231]: I0325 01:18:30.326562 3231 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-lib-modules\") on node \"ip-172-31-23-121\" DevicePath \"\"" Mar 25 01:18:30.326784 kubelet[3231]: I0325 01:18:30.326581 3231 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gk282\" (UniqueName: \"kubernetes.io/projected/0969260e-64dc-4bd0-bf5c-fdf82c8250da-kube-api-access-gk282\") on node \"ip-172-31-23-121\" DevicePath \"\"" Mar 25 01:18:30.327273 kubelet[3231]: I0325 01:18:30.326601 3231 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0969260e-64dc-4bd0-bf5c-fdf82c8250da-cilium-config-path\") on node \"ip-172-31-23-121\" DevicePath \"\"" Mar 25 01:18:30.327273 kubelet[3231]: I0325 01:18:30.326622 3231 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-xtables-lock\") on node \"ip-172-31-23-121\" DevicePath \"\"" Mar 25 01:18:30.327273 kubelet[3231]: I0325 01:18:30.326641 3231 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-etc-cni-netd\") on node \"ip-172-31-23-121\" DevicePath \"\"" Mar 25 01:18:30.327273 kubelet[3231]: I0325 01:18:30.326662 3231 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-cilium-cgroup\") on node \"ip-172-31-23-121\" DevicePath \"\"" Mar 25 01:18:30.327273 kubelet[3231]: I0325 01:18:30.326680 3231 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-host-proc-sys-kernel\") on node \"ip-172-31-23-121\" DevicePath \"\"" Mar 25 01:18:30.327273 kubelet[3231]: I0325 01:18:30.326699 3231 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-cilium-run\") on node 
\"ip-172-31-23-121\" DevicePath \"\"" Mar 25 01:18:30.327273 kubelet[3231]: I0325 01:18:30.326718 3231 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7d97d\" (UniqueName: \"kubernetes.io/projected/33fcaa15-597f-4e8a-ba47-2c25cdf67270-kube-api-access-7d97d\") on node \"ip-172-31-23-121\" DevicePath \"\"" Mar 25 01:18:30.327273 kubelet[3231]: I0325 01:18:30.326737 3231 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0969260e-64dc-4bd0-bf5c-fdf82c8250da-cni-path\") on node \"ip-172-31-23-121\" DevicePath \"\"" Mar 25 01:18:30.330556 containerd[1952]: time="2025-03-25T01:18:30.330505178Z" level=info msg="RemoveContainer for \"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78\" returns successfully" Mar 25 01:18:30.330864 kubelet[3231]: I0325 01:18:30.330823 3231 scope.go:117] "RemoveContainer" containerID="8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78" Mar 25 01:18:30.331314 containerd[1952]: time="2025-03-25T01:18:30.331170986Z" level=error msg="ContainerStatus for \"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78\": not found" Mar 25 01:18:30.331637 kubelet[3231]: E0325 01:18:30.331544 3231 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78\": not found" containerID="8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78" Mar 25 01:18:30.331637 kubelet[3231]: I0325 01:18:30.331598 3231 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78"} err="failed to get container status \"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b0b70370e494cf0cf055e2212061f3444dfe1f95741263ec065a40b15180c78\": not found" Mar 25 01:18:30.536033 systemd[1]: Removed slice kubepods-burstable-pod0969260e_64dc_4bd0_bf5c_fdf82c8250da.slice - libcontainer container kubepods-burstable-pod0969260e_64dc_4bd0_bf5c_fdf82c8250da.slice. Mar 25 01:18:30.536330 systemd[1]: kubepods-burstable-pod0969260e_64dc_4bd0_bf5c_fdf82c8250da.slice: Consumed 14.279s CPU time, 123.8M memory peak, 152K read from disk, 16.1M written to disk. Mar 25 01:18:30.556404 systemd[1]: Removed slice kubepods-besteffort-pod33fcaa15_597f_4e8a_ba47_2c25cdf67270.slice - libcontainer container kubepods-besteffort-pod33fcaa15_597f_4e8a_ba47_2c25cdf67270.slice. Mar 25 01:18:30.895190 systemd[1]: var-lib-kubelet-pods-33fcaa15\x2d597f\x2d4e8a\x2dba47\x2d2c25cdf67270-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7d97d.mount: Deactivated successfully. Mar 25 01:18:30.895407 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946-shm.mount: Deactivated successfully. Mar 25 01:18:30.895547 systemd[1]: var-lib-kubelet-pods-0969260e\x2d64dc\x2d4bd0\x2dbf5c\x2dfdf82c8250da-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgk282.mount: Deactivated successfully. 
Mar 25 01:18:30.895683 systemd[1]: var-lib-kubelet-pods-0969260e\x2d64dc\x2d4bd0\x2dbf5c\x2dfdf82c8250da-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 25 01:18:30.895853 systemd[1]: var-lib-kubelet-pods-0969260e\x2d64dc\x2d4bd0\x2dbf5c\x2dfdf82c8250da-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 25 01:18:31.728281 sshd[4897]: Connection closed by 147.75.109.163 port 57018 Mar 25 01:18:31.729175 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Mar 25 01:18:31.736037 systemd[1]: sshd@25-172.31.23.121:22-147.75.109.163:57018.service: Deactivated successfully. Mar 25 01:18:31.740522 systemd[1]: session-26.scope: Deactivated successfully. Mar 25 01:18:31.743346 systemd[1]: session-26.scope: Consumed 2.978s CPU time, 23.4M memory peak. Mar 25 01:18:31.744816 systemd-logind[1936]: Session 26 logged out. Waiting for processes to exit. Mar 25 01:18:31.746550 systemd-logind[1936]: Removed session 26. Mar 25 01:18:31.767540 systemd[1]: Started sshd@26-172.31.23.121:22-147.75.109.163:58660.service - OpenSSH per-connection server daemon (147.75.109.163:58660). Mar 25 01:18:31.769918 kubelet[3231]: I0325 01:18:31.768968 3231 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0969260e-64dc-4bd0-bf5c-fdf82c8250da" path="/var/lib/kubelet/pods/0969260e-64dc-4bd0-bf5c-fdf82c8250da/volumes" Mar 25 01:18:31.772614 kubelet[3231]: I0325 01:18:31.770875 3231 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33fcaa15-597f-4e8a-ba47-2c25cdf67270" path="/var/lib/kubelet/pods/33fcaa15-597f-4e8a-ba47-2c25cdf67270/volumes" Mar 25 01:18:31.935872 kubelet[3231]: E0325 01:18:31.935747 3231 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 25 01:18:31.964065 sshd[5049]: Accepted publickey for core from 147.75.109.163 port 58660 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:18:31.966604 sshd-session[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:18:31.974806 systemd-logind[1936]: New session 27 of user core. Mar 25 01:18:31.983472 systemd[1]: Started session-27.scope - Session 27 of User core. 
Mar 25 01:18:32.253346 ntpd[1929]: Deleting interface #11 lxc_health, fe80::2c1b:91ff:fee3:78de%8#123, interface stats: received=0, sent=0, dropped=0, active_time=80 secs Mar 25 01:18:32.254516 ntpd[1929]: 25 Mar 01:18:32 ntpd[1929]: Deleting interface #11 lxc_health, fe80::2c1b:91ff:fee3:78de%8#123, interface stats: received=0, sent=0, dropped=0, active_time=80 secs Mar 25 01:18:33.035313 sshd[5051]: Connection closed by 147.75.109.163 port 58660 Mar 25 01:18:33.039453 sshd-session[5049]: pam_unix(sshd:session): session closed for user core Mar 25 01:18:33.057330 kubelet[3231]: E0325 01:18:33.056841 3231 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="33fcaa15-597f-4e8a-ba47-2c25cdf67270" containerName="cilium-operator" Mar 25 01:18:33.057330 kubelet[3231]: E0325 01:18:33.056888 3231 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0969260e-64dc-4bd0-bf5c-fdf82c8250da" containerName="clean-cilium-state" Mar 25 01:18:33.057330 kubelet[3231]: E0325 01:18:33.056917 3231 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0969260e-64dc-4bd0-bf5c-fdf82c8250da" containerName="cilium-agent" Mar 25 01:18:33.057330 kubelet[3231]: E0325 01:18:33.056935 3231 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0969260e-64dc-4bd0-bf5c-fdf82c8250da" containerName="mount-cgroup" Mar 25 01:18:33.057330 kubelet[3231]: E0325 01:18:33.056954 3231 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0969260e-64dc-4bd0-bf5c-fdf82c8250da" containerName="apply-sysctl-overwrites" Mar 25 01:18:33.057330 kubelet[3231]: E0325 01:18:33.057111 3231 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0969260e-64dc-4bd0-bf5c-fdf82c8250da" containerName="mount-bpf-fs" Mar 25 01:18:33.060018 systemd[1]: sshd@26-172.31.23.121:22-147.75.109.163:58660.service: Deactivated successfully. Mar 25 01:18:33.064482 kubelet[3231]: I0325 01:18:33.057225 3231 memory_manager.go:354] "RemoveStaleState removing state" podUID="0969260e-64dc-4bd0-bf5c-fdf82c8250da" containerName="cilium-agent" Mar 25 01:18:33.064482 kubelet[3231]: I0325 01:18:33.061808 3231 memory_manager.go:354] "RemoveStaleState removing state" podUID="33fcaa15-597f-4e8a-ba47-2c25cdf67270" containerName="cilium-operator" Mar 25 01:18:33.068023 systemd[1]: session-27.scope: Deactivated successfully. Mar 25 01:18:33.072766 systemd-logind[1936]: Session 27 logged out. Waiting for processes to exit. Mar 25 01:18:33.102517 systemd[1]: Started sshd@27-172.31.23.121:22-147.75.109.163:58668.service - OpenSSH per-connection server daemon (147.75.109.163:58668). Mar 25 01:18:33.105804 systemd-logind[1936]: Removed session 27. Mar 25 01:18:33.133537 systemd[1]: Created slice kubepods-burstable-podb9bc0aad_5e0c_473c_93ea_ca01d16bb312.slice - libcontainer container kubepods-burstable-podb9bc0aad_5e0c_473c_93ea_ca01d16bb312.slice. 
Mar 25 01:18:33.143175 kubelet[3231]: I0325 01:18:33.141667 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9bc0aad-5e0c-473c-93ea-ca01d16bb312-cilium-config-path\") pod \"cilium-9s27h\" (UID: \"b9bc0aad-5e0c-473c-93ea-ca01d16bb312\") " pod="kube-system/cilium-9s27h" Mar 25 01:18:33.143175 kubelet[3231]: I0325 01:18:33.141744 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9bc0aad-5e0c-473c-93ea-ca01d16bb312-etc-cni-netd\") pod \"cilium-9s27h\" (UID: \"b9bc0aad-5e0c-473c-93ea-ca01d16bb312\") " pod="kube-system/cilium-9s27h" Mar 25 01:18:33.143175 kubelet[3231]: I0325 01:18:33.141782 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9bc0aad-5e0c-473c-93ea-ca01d16bb312-host-proc-sys-net\") pod \"cilium-9s27h\" (UID: \"b9bc0aad-5e0c-473c-93ea-ca01d16bb312\") " pod="kube-system/cilium-9s27h" Mar 25 01:18:33.143175 kubelet[3231]: I0325 01:18:33.141817 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9bc0aad-5e0c-473c-93ea-ca01d16bb312-host-proc-sys-kernel\") pod \"cilium-9s27h\" (UID: \"b9bc0aad-5e0c-473c-93ea-ca01d16bb312\") " pod="kube-system/cilium-9s27h" Mar 25 01:18:33.143175 kubelet[3231]: I0325 01:18:33.141855 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9bc0aad-5e0c-473c-93ea-ca01d16bb312-clustermesh-secrets\") pod \"cilium-9s27h\" (UID: \"b9bc0aad-5e0c-473c-93ea-ca01d16bb312\") " pod="kube-system/cilium-9s27h" Mar 25 01:18:33.143579 kubelet[3231]: I0325 01:18:33.141890 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9bc0aad-5e0c-473c-93ea-ca01d16bb312-xtables-lock\") pod \"cilium-9s27h\" (UID: \"b9bc0aad-5e0c-473c-93ea-ca01d16bb312\") " pod="kube-system/cilium-9s27h" Mar 25 01:18:33.143579 kubelet[3231]: I0325 01:18:33.141929 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9bc0aad-5e0c-473c-93ea-ca01d16bb312-cilium-run\") pod \"cilium-9s27h\" (UID: \"b9bc0aad-5e0c-473c-93ea-ca01d16bb312\") " pod="kube-system/cilium-9s27h" Mar 25 01:18:33.143579 kubelet[3231]: I0325 01:18:33.141968 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9bc0aad-5e0c-473c-93ea-ca01d16bb312-hostproc\") pod \"cilium-9s27h\" (UID: \"b9bc0aad-5e0c-473c-93ea-ca01d16bb312\") " pod="kube-system/cilium-9s27h" Mar 25 01:18:33.143579 kubelet[3231]: I0325 01:18:33.142016 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9bc0aad-5e0c-473c-93ea-ca01d16bb312-hubble-tls\") pod \"cilium-9s27h\" (UID: \"b9bc0aad-5e0c-473c-93ea-ca01d16bb312\") " pod="kube-system/cilium-9s27h" Mar 25 01:18:33.143579 kubelet[3231]: I0325 01:18:33.142055 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/b9bc0aad-5e0c-473c-93ea-ca01d16bb312-bpf-maps\") pod \"cilium-9s27h\" (UID: \"b9bc0aad-5e0c-473c-93ea-ca01d16bb312\") " pod="kube-system/cilium-9s27h" Mar 25 01:18:33.143579 kubelet[3231]: I0325 01:18:33.142090 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9bc0aad-5e0c-473c-93ea-ca01d16bb312-cilium-cgroup\") pod \"cilium-9s27h\" (UID: \"b9bc0aad-5e0c-473c-93ea-ca01d16bb312\") " pod="kube-system/cilium-9s27h" Mar 25 01:18:33.143867 kubelet[3231]: I0325 01:18:33.142122 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9bc0aad-5e0c-473c-93ea-ca01d16bb312-cni-path\") pod \"cilium-9s27h\" (UID: \"b9bc0aad-5e0c-473c-93ea-ca01d16bb312\") " pod="kube-system/cilium-9s27h" Mar 25 01:18:33.143867 kubelet[3231]: I0325 01:18:33.142158 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9bc0aad-5e0c-473c-93ea-ca01d16bb312-lib-modules\") pod \"cilium-9s27h\" (UID: \"b9bc0aad-5e0c-473c-93ea-ca01d16bb312\") " pod="kube-system/cilium-9s27h" Mar 25 01:18:33.143867 kubelet[3231]: I0325 01:18:33.142191 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b9bc0aad-5e0c-473c-93ea-ca01d16bb312-cilium-ipsec-secrets\") pod \"cilium-9s27h\" (UID: \"b9bc0aad-5e0c-473c-93ea-ca01d16bb312\") " pod="kube-system/cilium-9s27h" Mar 25 01:18:33.143867 kubelet[3231]: I0325 01:18:33.142249 3231 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m69zn\" (UniqueName: \"kubernetes.io/projected/b9bc0aad-5e0c-473c-93ea-ca01d16bb312-kube-api-access-m69zn\") pod \"cilium-9s27h\" (UID: \"b9bc0aad-5e0c-473c-93ea-ca01d16bb312\") " pod="kube-system/cilium-9s27h" Mar 25 01:18:33.336518 sshd[5061]: Accepted publickey for core from 147.75.109.163 port 58668 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:18:33.339950 sshd-session[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:18:33.349717 systemd-logind[1936]: New session 28 of user core. Mar 25 01:18:33.354475 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 25 01:18:33.456407 containerd[1952]: time="2025-03-25T01:18:33.456330954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9s27h,Uid:b9bc0aad-5e0c-473c-93ea-ca01d16bb312,Namespace:kube-system,Attempt:0,}" Mar 25 01:18:33.484358 sshd[5068]: Connection closed by 147.75.109.163 port 58668 Mar 25 01:18:33.483542 sshd-session[5061]: pam_unix(sshd:session): session closed for user core Mar 25 01:18:33.497136 systemd[1]: sshd@27-172.31.23.121:22-147.75.109.163:58668.service: Deactivated successfully. Mar 25 01:18:33.499241 containerd[1952]: time="2025-03-25T01:18:33.498004482Z" level=info msg="connecting to shim 655899f9d8994501be12eff08e4c8a45d36a7063a5305734d262c765b808dc8c" address="unix:///run/containerd/s/581cce634a2b1fb0fd23d26f38519a5bc75299a7c0e1b2df18ac6825900b3f14" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:18:33.503964 systemd[1]: session-28.scope: Deactivated successfully. Mar 25 01:18:33.507918 systemd-logind[1936]: Session 28 logged out. Waiting for processes to exit. 
Mar 25 01:18:33.544322 systemd-logind[1936]: Removed session 28. Mar 25 01:18:33.551527 systemd[1]: Started cri-containerd-655899f9d8994501be12eff08e4c8a45d36a7063a5305734d262c765b808dc8c.scope - libcontainer container 655899f9d8994501be12eff08e4c8a45d36a7063a5305734d262c765b808dc8c. Mar 25 01:18:33.557644 systemd[1]: Started sshd@28-172.31.23.121:22-147.75.109.163:58682.service - OpenSSH per-connection server daemon (147.75.109.163:58682). Mar 25 01:18:33.614399 containerd[1952]: time="2025-03-25T01:18:33.614161471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9s27h,Uid:b9bc0aad-5e0c-473c-93ea-ca01d16bb312,Namespace:kube-system,Attempt:0,} returns sandbox id \"655899f9d8994501be12eff08e4c8a45d36a7063a5305734d262c765b808dc8c\"" Mar 25 01:18:33.622834 containerd[1952]: time="2025-03-25T01:18:33.622366231Z" level=info msg="CreateContainer within sandbox \"655899f9d8994501be12eff08e4c8a45d36a7063a5305734d262c765b808dc8c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 25 01:18:33.639304 containerd[1952]: time="2025-03-25T01:18:33.639233335Z" level=info msg="Container ffde22ddf06cedb0bc0f5fb4d62d8186bcdd9f69a268ee6c23eada97072ba202: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:33.652135 containerd[1952]: time="2025-03-25T01:18:33.652056403Z" level=info msg="CreateContainer within sandbox \"655899f9d8994501be12eff08e4c8a45d36a7063a5305734d262c765b808dc8c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ffde22ddf06cedb0bc0f5fb4d62d8186bcdd9f69a268ee6c23eada97072ba202\"" Mar 25 01:18:33.654395 containerd[1952]: time="2025-03-25T01:18:33.654145999Z" level=info msg="StartContainer for \"ffde22ddf06cedb0bc0f5fb4d62d8186bcdd9f69a268ee6c23eada97072ba202\"" Mar 25 01:18:33.656548 containerd[1952]: time="2025-03-25T01:18:33.656441719Z" level=info msg="connecting to shim ffde22ddf06cedb0bc0f5fb4d62d8186bcdd9f69a268ee6c23eada97072ba202" address="unix:///run/containerd/s/581cce634a2b1fb0fd23d26f38519a5bc75299a7c0e1b2df18ac6825900b3f14" protocol=ttrpc version=3 Mar 25 01:18:33.693514 systemd[1]: Started cri-containerd-ffde22ddf06cedb0bc0f5fb4d62d8186bcdd9f69a268ee6c23eada97072ba202.scope - libcontainer container ffde22ddf06cedb0bc0f5fb4d62d8186bcdd9f69a268ee6c23eada97072ba202. Mar 25 01:18:33.747835 containerd[1952]: time="2025-03-25T01:18:33.747677107Z" level=info msg="StartContainer for \"ffde22ddf06cedb0bc0f5fb4d62d8186bcdd9f69a268ee6c23eada97072ba202\" returns successfully" Mar 25 01:18:33.766162 systemd[1]: cri-containerd-ffde22ddf06cedb0bc0f5fb4d62d8186bcdd9f69a268ee6c23eada97072ba202.scope: Deactivated successfully. 
Mar 25 01:18:33.775245 containerd[1952]: time="2025-03-25T01:18:33.775020320Z" level=info msg="received exit event container_id:\"ffde22ddf06cedb0bc0f5fb4d62d8186bcdd9f69a268ee6c23eada97072ba202\" id:\"ffde22ddf06cedb0bc0f5fb4d62d8186bcdd9f69a268ee6c23eada97072ba202\" pid:5135 exited_at:{seconds:1742865513 nanos:774372416}" Mar 25 01:18:33.775958 containerd[1952]: time="2025-03-25T01:18:33.775777760Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ffde22ddf06cedb0bc0f5fb4d62d8186bcdd9f69a268ee6c23eada97072ba202\" id:\"ffde22ddf06cedb0bc0f5fb4d62d8186bcdd9f69a268ee6c23eada97072ba202\" pid:5135 exited_at:{seconds:1742865513 nanos:774372416}" Mar 25 01:18:33.781493 sshd[5107]: Accepted publickey for core from 147.75.109.163 port 58682 ssh2: RSA SHA256:PeYl6nTqnDkQzDdjcMK19FTRt4hKUhyBe4JyMaX2oCc Mar 25 01:18:33.785121 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:18:33.797686 systemd-logind[1936]: New session 29 of user core. Mar 25 01:18:33.802113 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 25 01:18:34.275005 containerd[1952]: time="2025-03-25T01:18:34.272471310Z" level=info msg="CreateContainer within sandbox \"655899f9d8994501be12eff08e4c8a45d36a7063a5305734d262c765b808dc8c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 25 01:18:34.288538 containerd[1952]: time="2025-03-25T01:18:34.288485922Z" level=info msg="Container 922f3f020e7d6cce32562e27bbff8656b36d64ead4ee4092d086857e4b0b5359: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:34.309154 containerd[1952]: time="2025-03-25T01:18:34.308533530Z" level=info msg="CreateContainer within sandbox \"655899f9d8994501be12eff08e4c8a45d36a7063a5305734d262c765b808dc8c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"922f3f020e7d6cce32562e27bbff8656b36d64ead4ee4092d086857e4b0b5359\"" Mar 25 01:18:34.309975 containerd[1952]: time="2025-03-25T01:18:34.309794334Z" level=info msg="StartContainer for \"922f3f020e7d6cce32562e27bbff8656b36d64ead4ee4092d086857e4b0b5359\"" Mar 25 01:18:34.312881 containerd[1952]: time="2025-03-25T01:18:34.312801042Z" level=info msg="connecting to shim 922f3f020e7d6cce32562e27bbff8656b36d64ead4ee4092d086857e4b0b5359" address="unix:///run/containerd/s/581cce634a2b1fb0fd23d26f38519a5bc75299a7c0e1b2df18ac6825900b3f14" protocol=ttrpc version=3 Mar 25 01:18:34.353534 systemd[1]: Started cri-containerd-922f3f020e7d6cce32562e27bbff8656b36d64ead4ee4092d086857e4b0b5359.scope - libcontainer container 922f3f020e7d6cce32562e27bbff8656b36d64ead4ee4092d086857e4b0b5359. Mar 25 01:18:34.409995 containerd[1952]: time="2025-03-25T01:18:34.409529191Z" level=info msg="StartContainer for \"922f3f020e7d6cce32562e27bbff8656b36d64ead4ee4092d086857e4b0b5359\" returns successfully" Mar 25 01:18:34.420588 systemd[1]: cri-containerd-922f3f020e7d6cce32562e27bbff8656b36d64ead4ee4092d086857e4b0b5359.scope: Deactivated successfully. 
Mar 25 01:18:34.424169 containerd[1952]: time="2025-03-25T01:18:34.423712195Z" level=info msg="received exit event container_id:\"922f3f020e7d6cce32562e27bbff8656b36d64ead4ee4092d086857e4b0b5359\" id:\"922f3f020e7d6cce32562e27bbff8656b36d64ead4ee4092d086857e4b0b5359\" pid:5186 exited_at:{seconds:1742865514 nanos:423295075}" Mar 25 01:18:34.424169 containerd[1952]: time="2025-03-25T01:18:34.424065283Z" level=info msg="TaskExit event in podsandbox handler container_id:\"922f3f020e7d6cce32562e27bbff8656b36d64ead4ee4092d086857e4b0b5359\" id:\"922f3f020e7d6cce32562e27bbff8656b36d64ead4ee4092d086857e4b0b5359\" pid:5186 exited_at:{seconds:1742865514 nanos:423295075}" Mar 25 01:18:34.462185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-922f3f020e7d6cce32562e27bbff8656b36d64ead4ee4092d086857e4b0b5359-rootfs.mount: Deactivated successfully. Mar 25 01:18:35.093561 kubelet[3231]: I0325 01:18:35.093497 3231 setters.go:600] "Node became not ready" node="ip-172-31-23-121" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-25T01:18:35Z","lastTransitionTime":"2025-03-25T01:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 25 01:18:35.278110 containerd[1952]: time="2025-03-25T01:18:35.278024611Z" level=info msg="CreateContainer within sandbox \"655899f9d8994501be12eff08e4c8a45d36a7063a5305734d262c765b808dc8c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 25 01:18:35.304241 containerd[1952]: time="2025-03-25T01:18:35.302055139Z" level=info msg="Container 618af5c2f1e561d61787f78cc8e5bc3210bfa248f4956ba37ce4e1a6d5951dcf: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:35.325044 containerd[1952]: time="2025-03-25T01:18:35.324897871Z" level=info msg="CreateContainer within sandbox \"655899f9d8994501be12eff08e4c8a45d36a7063a5305734d262c765b808dc8c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"618af5c2f1e561d61787f78cc8e5bc3210bfa248f4956ba37ce4e1a6d5951dcf\"" Mar 25 01:18:35.327635 containerd[1952]: time="2025-03-25T01:18:35.327534919Z" level=info msg="StartContainer for \"618af5c2f1e561d61787f78cc8e5bc3210bfa248f4956ba37ce4e1a6d5951dcf\"" Mar 25 01:18:35.332962 containerd[1952]: time="2025-03-25T01:18:35.332884339Z" level=info msg="connecting to shim 618af5c2f1e561d61787f78cc8e5bc3210bfa248f4956ba37ce4e1a6d5951dcf" address="unix:///run/containerd/s/581cce634a2b1fb0fd23d26f38519a5bc75299a7c0e1b2df18ac6825900b3f14" protocol=ttrpc version=3 Mar 25 01:18:35.374520 systemd[1]: Started cri-containerd-618af5c2f1e561d61787f78cc8e5bc3210bfa248f4956ba37ce4e1a6d5951dcf.scope - libcontainer container 618af5c2f1e561d61787f78cc8e5bc3210bfa248f4956ba37ce4e1a6d5951dcf. Mar 25 01:18:35.450552 containerd[1952]: time="2025-03-25T01:18:35.450470984Z" level=info msg="StartContainer for \"618af5c2f1e561d61787f78cc8e5bc3210bfa248f4956ba37ce4e1a6d5951dcf\" returns successfully" Mar 25 01:18:35.453703 systemd[1]: cri-containerd-618af5c2f1e561d61787f78cc8e5bc3210bfa248f4956ba37ce4e1a6d5951dcf.scope: Deactivated successfully. 
Mar 25 01:18:35.459790 containerd[1952]: time="2025-03-25T01:18:35.459609128Z" level=info msg="TaskExit event in podsandbox handler container_id:\"618af5c2f1e561d61787f78cc8e5bc3210bfa248f4956ba37ce4e1a6d5951dcf\" id:\"618af5c2f1e561d61787f78cc8e5bc3210bfa248f4956ba37ce4e1a6d5951dcf\" pid:5229 exited_at:{seconds:1742865515 nanos:459051344}" Mar 25 01:18:35.460342 containerd[1952]: time="2025-03-25T01:18:35.459731708Z" level=info msg="received exit event container_id:\"618af5c2f1e561d61787f78cc8e5bc3210bfa248f4956ba37ce4e1a6d5951dcf\" id:\"618af5c2f1e561d61787f78cc8e5bc3210bfa248f4956ba37ce4e1a6d5951dcf\" pid:5229 exited_at:{seconds:1742865515 nanos:459051344}" Mar 25 01:18:35.499957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-618af5c2f1e561d61787f78cc8e5bc3210bfa248f4956ba37ce4e1a6d5951dcf-rootfs.mount: Deactivated successfully. Mar 25 01:18:35.662635 update_engine[1938]: I20250325 01:18:35.662478 1938 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 25 01:18:35.664309 update_engine[1938]: I20250325 01:18:35.663542 1938 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 25 01:18:35.664309 update_engine[1938]: I20250325 01:18:35.663815 1938 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 25 01:18:35.664949 update_engine[1938]: I20250325 01:18:35.664914 1938 omaha_request_params.cc:62] Current group set to alpha Mar 25 01:18:35.665170 update_engine[1938]: I20250325 01:18:35.665139 1938 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 25 01:18:35.666290 update_engine[1938]: I20250325 01:18:35.665273 1938 update_attempter.cc:643] Scheduling an action processor start. Mar 25 01:18:35.666290 update_engine[1938]: I20250325 01:18:35.665313 1938 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 25 01:18:35.666290 update_engine[1938]: I20250325 01:18:35.665380 1938 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 25 01:18:35.666290 update_engine[1938]: I20250325 01:18:35.665478 1938 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 25 01:18:35.666290 update_engine[1938]: I20250325 01:18:35.665498 1938 omaha_request_action.cc:272] Request: Mar 25 01:18:35.666290 update_engine[1938]: Mar 25 01:18:35.666290 update_engine[1938]: Mar 25 01:18:35.666290 update_engine[1938]: Mar 25 01:18:35.666290 update_engine[1938]: Mar 25 01:18:35.666290 update_engine[1938]: Mar 25 01:18:35.666290 update_engine[1938]: Mar 25 01:18:35.666290 update_engine[1938]: Mar 25 01:18:35.666290 update_engine[1938]: Mar 25 01:18:35.666290 update_engine[1938]: I20250325 01:18:35.665515 1938 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 25 01:18:35.666970 locksmithd[1975]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 25 01:18:35.668654 update_engine[1938]: I20250325 01:18:35.668592 1938 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 25 01:18:35.669183 update_engine[1938]: I20250325 01:18:35.669121 1938 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 25 01:18:35.700330 update_engine[1938]: E20250325 01:18:35.700249 1938 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 25 01:18:35.700471 update_engine[1938]: I20250325 01:18:35.700385 1938 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 25 01:18:36.286869 containerd[1952]: time="2025-03-25T01:18:36.286613648Z" level=info msg="CreateContainer within sandbox \"655899f9d8994501be12eff08e4c8a45d36a7063a5305734d262c765b808dc8c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 25 01:18:36.304370 containerd[1952]: time="2025-03-25T01:18:36.304301792Z" level=info msg="Container 8124ba6db2f215dc6f1fa46bc65d0f11291c33f80a582c3899cf0bbb932c8269: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:18:36.322808 containerd[1952]: time="2025-03-25T01:18:36.322734020Z" level=info msg="CreateContainer within sandbox \"655899f9d8994501be12eff08e4c8a45d36a7063a5305734d262c765b808dc8c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8124ba6db2f215dc6f1fa46bc65d0f11291c33f80a582c3899cf0bbb932c8269\"" Mar 25 01:18:36.323597 containerd[1952]: time="2025-03-25T01:18:36.323524352Z" level=info msg="StartContainer for \"8124ba6db2f215dc6f1fa46bc65d0f11291c33f80a582c3899cf0bbb932c8269\"" Mar 25 01:18:36.325339 containerd[1952]: time="2025-03-25T01:18:36.325179056Z" level=info msg="connecting to shim 8124ba6db2f215dc6f1fa46bc65d0f11291c33f80a582c3899cf0bbb932c8269" address="unix:///run/containerd/s/581cce634a2b1fb0fd23d26f38519a5bc75299a7c0e1b2df18ac6825900b3f14" protocol=ttrpc version=3 Mar 25 01:18:36.363482 systemd[1]: Started cri-containerd-8124ba6db2f215dc6f1fa46bc65d0f11291c33f80a582c3899cf0bbb932c8269.scope - libcontainer container 8124ba6db2f215dc6f1fa46bc65d0f11291c33f80a582c3899cf0bbb932c8269. Mar 25 01:18:36.408400 systemd[1]: cri-containerd-8124ba6db2f215dc6f1fa46bc65d0f11291c33f80a582c3899cf0bbb932c8269.scope: Deactivated successfully. Mar 25 01:18:36.410491 containerd[1952]: time="2025-03-25T01:18:36.410182377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8124ba6db2f215dc6f1fa46bc65d0f11291c33f80a582c3899cf0bbb932c8269\" id:\"8124ba6db2f215dc6f1fa46bc65d0f11291c33f80a582c3899cf0bbb932c8269\" pid:5267 exited_at:{seconds:1742865516 nanos:409786401}" Mar 25 01:18:36.412343 containerd[1952]: time="2025-03-25T01:18:36.411639177Z" level=info msg="received exit event container_id:\"8124ba6db2f215dc6f1fa46bc65d0f11291c33f80a582c3899cf0bbb932c8269\" id:\"8124ba6db2f215dc6f1fa46bc65d0f11291c33f80a582c3899cf0bbb932c8269\" pid:5267 exited_at:{seconds:1742865516 nanos:409786401}" Mar 25 01:18:36.428872 containerd[1952]: time="2025-03-25T01:18:36.428798781Z" level=info msg="StartContainer for \"8124ba6db2f215dc6f1fa46bc65d0f11291c33f80a582c3899cf0bbb932c8269\" returns successfully" Mar 25 01:18:36.453591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8124ba6db2f215dc6f1fa46bc65d0f11291c33f80a582c3899cf0bbb932c8269-rootfs.mount: Deactivated successfully. 
Mar 25 01:18:36.937136 kubelet[3231]: E0325 01:18:36.937071 3231 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 25 01:18:37.307331 containerd[1952]: time="2025-03-25T01:18:37.305294169Z" level=info msg="CreateContainer within sandbox \"655899f9d8994501be12eff08e4c8a45d36a7063a5305734d262c765b808dc8c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 25 01:18:37.334465 containerd[1952]: time="2025-03-25T01:18:37.330820977Z" level=info msg="Container 18310b6dc95e71e069a9e04cfb55a0224183c498575e69c15d4f074805872c28: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:18:37.353989 containerd[1952]: time="2025-03-25T01:18:37.353935977Z" level=info msg="CreateContainer within sandbox \"655899f9d8994501be12eff08e4c8a45d36a7063a5305734d262c765b808dc8c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"18310b6dc95e71e069a9e04cfb55a0224183c498575e69c15d4f074805872c28\""
Mar 25 01:18:37.355485 containerd[1952]: time="2025-03-25T01:18:37.355378941Z" level=info msg="StartContainer for \"18310b6dc95e71e069a9e04cfb55a0224183c498575e69c15d4f074805872c28\""
Mar 25 01:18:37.358115 containerd[1952]: time="2025-03-25T01:18:37.357998733Z" level=info msg="connecting to shim 18310b6dc95e71e069a9e04cfb55a0224183c498575e69c15d4f074805872c28" address="unix:///run/containerd/s/581cce634a2b1fb0fd23d26f38519a5bc75299a7c0e1b2df18ac6825900b3f14" protocol=ttrpc version=3
Mar 25 01:18:37.398524 systemd[1]: Started cri-containerd-18310b6dc95e71e069a9e04cfb55a0224183c498575e69c15d4f074805872c28.scope - libcontainer container 18310b6dc95e71e069a9e04cfb55a0224183c498575e69c15d4f074805872c28.
Mar 25 01:18:37.461342 containerd[1952]: time="2025-03-25T01:18:37.461226382Z" level=info msg="StartContainer for \"18310b6dc95e71e069a9e04cfb55a0224183c498575e69c15d4f074805872c28\" returns successfully"
Mar 25 01:18:37.612770 containerd[1952]: time="2025-03-25T01:18:37.612626999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18310b6dc95e71e069a9e04cfb55a0224183c498575e69c15d4f074805872c28\" id:\"e869c697076de70c1b3e44ffff644aeb8c7edff65ed9e5ef59aec84cf7256f8c\" pid:5333 exited_at:{seconds:1742865517 nanos:611973743}"
Mar 25 01:18:37.763955 kubelet[3231]: E0325 01:18:37.763598 3231 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-tphhc" podUID="db475e4d-c3d4-49a7-bde5-6879ed188281"
Mar 25 01:18:38.284281 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 25 01:18:39.766245 kubelet[3231]: E0325 01:18:39.763561 3231 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-tphhc" podUID="db475e4d-c3d4-49a7-bde5-6879ed188281"
Mar 25 01:18:40.606195 containerd[1952]: time="2025-03-25T01:18:40.606122521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18310b6dc95e71e069a9e04cfb55a0224183c498575e69c15d4f074805872c28\" id:\"a2f641a3ad99c43d2565950c1cf9dbd62a799ab279f95a4f22f85d333800b5da\" pid:5488 exit_status:1 exited_at:{seconds:1742865520 nanos:605703445}"
Mar 25 01:18:41.709182 containerd[1952]: time="2025-03-25T01:18:41.709080915Z" level=info msg="StopPodSandbox for \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\""
Mar 25 01:18:41.710187 containerd[1952]: time="2025-03-25T01:18:41.709314315Z" level=info msg="TearDown network for sandbox \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" successfully"
Mar 25 01:18:41.710187 containerd[1952]: time="2025-03-25T01:18:41.709342359Z" level=info msg="StopPodSandbox for \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" returns successfully"
Mar 25 01:18:41.710187 containerd[1952]: time="2025-03-25T01:18:41.710097675Z" level=info msg="RemovePodSandbox for \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\""
Mar 25 01:18:41.710187 containerd[1952]: time="2025-03-25T01:18:41.710142195Z" level=info msg="Forcibly stopping sandbox \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\""
Mar 25 01:18:41.711185 containerd[1952]: time="2025-03-25T01:18:41.710315211Z" level=info msg="TearDown network for sandbox \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" successfully"
Mar 25 01:18:41.713583 containerd[1952]: time="2025-03-25T01:18:41.713495139Z" level=info msg="Ensure that sandbox 6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946 in task-service has been cleanup successfully"
Mar 25 01:18:41.720893 containerd[1952]: time="2025-03-25T01:18:41.720607383Z" level=info msg="RemovePodSandbox \"6e0ac1efd4742bde33d676b94667f6daf9ea5292b401b3846ef47a5f51342946\" returns successfully"
Mar 25 01:18:41.721839 containerd[1952]: time="2025-03-25T01:18:41.721791303Z" level=info msg="StopPodSandbox for \"04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3\""
Mar 25 01:18:41.722008 containerd[1952]: time="2025-03-25T01:18:41.721974567Z" level=info msg="TearDown network for sandbox \"04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3\" successfully"
Mar 25 01:18:41.722090 containerd[1952]: time="2025-03-25T01:18:41.722007567Z" level=info msg="StopPodSandbox for \"04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3\" returns successfully"
Mar 25 01:18:41.723611 containerd[1952]: time="2025-03-25T01:18:41.723274707Z" level=info msg="RemovePodSandbox for \"04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3\""
Mar 25 01:18:41.723611 containerd[1952]: time="2025-03-25T01:18:41.723333267Z" level=info msg="Forcibly stopping sandbox \"04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3\""
Mar 25 01:18:41.723611 containerd[1952]: time="2025-03-25T01:18:41.723494235Z" level=info msg="TearDown network for sandbox \"04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3\" successfully"
Mar 25 01:18:41.726638 containerd[1952]: time="2025-03-25T01:18:41.726551379Z" level=info msg="Ensure that sandbox 04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3 in task-service has been cleanup successfully"
Mar 25 01:18:41.733537 containerd[1952]: time="2025-03-25T01:18:41.733307187Z" level=info msg="RemovePodSandbox \"04d71d725d6fd78e94d33ff987ae2b3091111719e0191d0dd751ef843fe09bb3\" returns successfully"
Mar 25 01:18:41.764745 kubelet[3231]: E0325 01:18:41.764538 3231 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-tphhc" podUID="db475e4d-c3d4-49a7-bde5-6879ed188281"
Mar 25 01:18:42.506100 systemd-networkd[1856]: lxc_health: Link UP
Mar 25 01:18:42.516669 systemd-networkd[1856]: lxc_health: Gained carrier
Mar 25 01:18:42.521074 (udev-worker)[5820]: Network interface NamePolicy= disabled on kernel command line.
Mar 25 01:18:42.986986 containerd[1952]: time="2025-03-25T01:18:42.986480861Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18310b6dc95e71e069a9e04cfb55a0224183c498575e69c15d4f074805872c28\" id:\"62cabb9715d6c95b6854cd0ca345946f923696eae9658f32d369143d4aa2ce1c\" pid:5847 exited_at:{seconds:1742865522 nanos:985538009}"
Mar 25 01:18:43.493841 kubelet[3231]: I0325 01:18:43.493665 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9s27h" podStartSLOduration=10.493615168 podStartE2EDuration="10.493615168s" podCreationTimestamp="2025-03-25 01:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:18:38.345900862 +0000 UTC m=+116.846048284" watchObservedRunningTime="2025-03-25 01:18:43.493615168 +0000 UTC m=+121.993762746"
Mar 25 01:18:43.767915 systemd-networkd[1856]: lxc_health: Gained IPv6LL
Mar 25 01:18:45.356001 containerd[1952]: time="2025-03-25T01:18:45.355818557Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18310b6dc95e71e069a9e04cfb55a0224183c498575e69c15d4f074805872c28\" id:\"79316a48f3a96c7bb0464cfb17ed516cb342455ce3b92b9febcaeb6467b4f1df\" pid:5875 exited_at:{seconds:1742865525 nanos:355180253}"
Mar 25 01:18:45.670603 update_engine[1938]: I20250325 01:18:45.670277 1938 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 25 01:18:45.671117 update_engine[1938]: I20250325 01:18:45.670681 1938 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 25 01:18:45.671117 update_engine[1938]: I20250325 01:18:45.671043 1938 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 25 01:18:45.672157 update_engine[1938]: E20250325 01:18:45.671656 1938 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 25 01:18:45.672157 update_engine[1938]: I20250325 01:18:45.671763 1938 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 25 01:18:46.253087 ntpd[1929]: Listen normally on 14 lxc_health [fe80::f4d7:c4ff:fe3e:e98d%14]:123
Mar 25 01:18:46.254008 ntpd[1929]: 25 Mar 01:18:46 ntpd[1929]: Listen normally on 14 lxc_health [fe80::f4d7:c4ff:fe3e:e98d%14]:123
Mar 25 01:18:47.585372 containerd[1952]: time="2025-03-25T01:18:47.585287216Z" level=info msg="TaskExit event in podsandbox handler container_id:\"18310b6dc95e71e069a9e04cfb55a0224183c498575e69c15d4f074805872c28\" id:\"d0a2cd92c833747d0e0f2dee25ea1c7672863866546d2da270359befbfd3123c\" pid:5903 exited_at:{seconds:1742865527 nanos:583551356}"
Mar 25 01:18:47.617727 sshd[5166]: Connection closed by 147.75.109.163 port 58682
Mar 25 01:18:47.618266 sshd-session[5107]: pam_unix(sshd:session): session closed for user core
Mar 25 01:18:47.628002 systemd[1]: sshd@28-172.31.23.121:22-147.75.109.163:58682.service: Deactivated successfully.
Mar 25 01:18:47.638542 systemd[1]: session-29.scope: Deactivated successfully.
Mar 25 01:18:47.640667 systemd-logind[1936]: Session 29 logged out. Waiting for processes to exit.
Mar 25 01:18:47.644053 systemd-logind[1936]: Removed session 29.
Mar 25 01:18:55.669990 update_engine[1938]: I20250325 01:18:55.669553 1938 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 25 01:18:55.671047 update_engine[1938]: I20250325 01:18:55.669891 1938 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 25 01:18:55.671523 update_engine[1938]: I20250325 01:18:55.671455 1938 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 25 01:18:55.672076 update_engine[1938]: E20250325 01:18:55.671950 1938 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 25 01:18:55.672076 update_engine[1938]: I20250325 01:18:55.672035 1938 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 25 01:19:02.766104 systemd[1]: cri-containerd-013b4d22050f439a0fc4a6f919916ee5e1e056998328843fa6017be178f6b7bb.scope: Deactivated successfully. Mar 25 01:19:02.767166 systemd[1]: cri-containerd-013b4d22050f439a0fc4a6f919916ee5e1e056998328843fa6017be178f6b7bb.scope: Consumed 5.275s CPU time, 53.8M memory peak. Mar 25 01:19:02.774034 containerd[1952]: time="2025-03-25T01:19:02.773840148Z" level=info msg="received exit event container_id:\"013b4d22050f439a0fc4a6f919916ee5e1e056998328843fa6017be178f6b7bb\" id:\"013b4d22050f439a0fc4a6f919916ee5e1e056998328843fa6017be178f6b7bb\" pid:3061 exit_status:1 exited_at:{seconds:1742865542 nanos:773470404}" Mar 25 01:19:02.774034 containerd[1952]: time="2025-03-25T01:19:02.773953452Z" level=info msg="TaskExit event in podsandbox handler container_id:\"013b4d22050f439a0fc4a6f919916ee5e1e056998328843fa6017be178f6b7bb\" id:\"013b4d22050f439a0fc4a6f919916ee5e1e056998328843fa6017be178f6b7bb\" pid:3061 exit_status:1 exited_at:{seconds:1742865542 nanos:773470404}" Mar 25 01:19:02.813311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-013b4d22050f439a0fc4a6f919916ee5e1e056998328843fa6017be178f6b7bb-rootfs.mount: Deactivated successfully. 
Mar 25 01:19:03.380080 kubelet[3231]: I0325 01:19:03.379895 3231 scope.go:117] "RemoveContainer" containerID="013b4d22050f439a0fc4a6f919916ee5e1e056998328843fa6017be178f6b7bb" Mar 25 01:19:03.383643 containerd[1952]: time="2025-03-25T01:19:03.383563235Z" level=info msg="CreateContainer within sandbox \"39aff35496c84746f4f138361298a2725d76b2df60f08965d36993c248838796\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 25 01:19:03.398442 containerd[1952]: time="2025-03-25T01:19:03.398374967Z" level=info msg="Container 6594a52c32d67685e53f89c7fd60fc30a5a971ed4fc4b6130638496a8f489361: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:19:03.422278 containerd[1952]: time="2025-03-25T01:19:03.422171951Z" level=info msg="CreateContainer within sandbox \"39aff35496c84746f4f138361298a2725d76b2df60f08965d36993c248838796\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6594a52c32d67685e53f89c7fd60fc30a5a971ed4fc4b6130638496a8f489361\"" Mar 25 01:19:03.423236 containerd[1952]: time="2025-03-25T01:19:03.423074999Z" level=info msg="StartContainer for \"6594a52c32d67685e53f89c7fd60fc30a5a971ed4fc4b6130638496a8f489361\"" Mar 25 01:19:03.425185 containerd[1952]: time="2025-03-25T01:19:03.425117279Z" level=info msg="connecting to shim 6594a52c32d67685e53f89c7fd60fc30a5a971ed4fc4b6130638496a8f489361" address="unix:///run/containerd/s/ac38895cf891cdb2a284ce4f625088732a3ab1be90883ca80939802882583d95" protocol=ttrpc version=3 Mar 25 01:19:03.465527 systemd[1]: Started cri-containerd-6594a52c32d67685e53f89c7fd60fc30a5a971ed4fc4b6130638496a8f489361.scope - libcontainer container 6594a52c32d67685e53f89c7fd60fc30a5a971ed4fc4b6130638496a8f489361. Mar 25 01:19:03.548443 containerd[1952]: time="2025-03-25T01:19:03.548024723Z" level=info msg="StartContainer for \"6594a52c32d67685e53f89c7fd60fc30a5a971ed4fc4b6130638496a8f489361\" returns successfully" Mar 25 01:19:05.535358 kubelet[3231]: E0325 01:19:05.534899 3231 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-121?timeout=10s\": context deadline exceeded" Mar 25 01:19:05.665113 update_engine[1938]: I20250325 01:19:05.664286 1938 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 25 01:19:05.665113 update_engine[1938]: I20250325 01:19:05.664691 1938 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 25 01:19:05.665113 update_engine[1938]: I20250325 01:19:05.665032 1938 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 25 01:19:05.668175 update_engine[1938]: E20250325 01:19:05.666388 1938 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 25 01:19:05.668175 update_engine[1938]: I20250325 01:19:05.666492 1938 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 25 01:19:05.668175 update_engine[1938]: I20250325 01:19:05.666515 1938 omaha_request_action.cc:617] Omaha request response: Mar 25 01:19:05.668175 update_engine[1938]: E20250325 01:19:05.666626 1938 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 25 01:19:05.668175 update_engine[1938]: I20250325 01:19:05.666659 1938 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Mar 25 01:19:05.668175 update_engine[1938]: I20250325 01:19:05.666675 1938 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 25 01:19:05.668175 update_engine[1938]: I20250325 01:19:05.666690 1938 update_attempter.cc:306] Processing Done. Mar 25 01:19:05.668175 update_engine[1938]: E20250325 01:19:05.666718 1938 update_attempter.cc:619] Update failed. Mar 25 01:19:05.668175 update_engine[1938]: I20250325 01:19:05.666734 1938 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 25 01:19:05.668175 update_engine[1938]: I20250325 01:19:05.666749 1938 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 25 01:19:05.668175 update_engine[1938]: I20250325 01:19:05.666764 1938 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 25 01:19:05.668175 update_engine[1938]: I20250325 01:19:05.666869 1938 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 25 01:19:05.668175 update_engine[1938]: I20250325 01:19:05.666908 1938 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 25 01:19:05.668175 update_engine[1938]: I20250325 01:19:05.666925 1938 omaha_request_action.cc:272] Request: Mar 25 01:19:05.668175 update_engine[1938]: Mar 25 01:19:05.668175 update_engine[1938]: Mar 25 01:19:05.668970 update_engine[1938]: Mar 25 01:19:05.668970 update_engine[1938]: Mar 25 01:19:05.668970 update_engine[1938]: Mar 25 01:19:05.668970 update_engine[1938]: Mar 25 01:19:05.668970 update_engine[1938]: I20250325 01:19:05.666941 1938 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 25 01:19:05.668970 update_engine[1938]: I20250325 01:19:05.667197 1938 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 25 01:19:05.668970 update_engine[1938]: I20250325 01:19:05.667577 1938 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 25 01:19:05.668970 update_engine[1938]: E20250325 01:19:05.667904 1938 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 25 01:19:05.668970 update_engine[1938]: I20250325 01:19:05.667968 1938 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 25 01:19:05.668970 update_engine[1938]: I20250325 01:19:05.667986 1938 omaha_request_action.cc:617] Omaha request response: Mar 25 01:19:05.668970 update_engine[1938]: I20250325 01:19:05.668004 1938 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 25 01:19:05.668970 update_engine[1938]: I20250325 01:19:05.668018 1938 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 25 01:19:05.668970 update_engine[1938]: I20250325 01:19:05.668032 1938 update_attempter.cc:306] Processing Done. Mar 25 01:19:05.668970 update_engine[1938]: I20250325 01:19:05.668047 1938 update_attempter.cc:310] Error event sent. 
Mar 25 01:19:05.668970 update_engine[1938]: I20250325 01:19:05.668066 1938 update_check_scheduler.cc:74] Next update check in 47m44s Mar 25 01:19:05.670600 locksmithd[1975]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 25 01:19:05.670600 locksmithd[1975]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 25 01:19:07.619879 systemd[1]: cri-containerd-cd46281251f21d6444112b03a8ee263c15c339c011bdbd85e9ac685e3c552ecb.scope: Deactivated successfully. Mar 25 01:19:07.621187 systemd[1]: cri-containerd-cd46281251f21d6444112b03a8ee263c15c339c011bdbd85e9ac685e3c552ecb.scope: Consumed 3.434s CPU time, 20.8M memory peak. Mar 25 01:19:07.623021 containerd[1952]: time="2025-03-25T01:19:07.622973848Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd46281251f21d6444112b03a8ee263c15c339c011bdbd85e9ac685e3c552ecb\" id:\"cd46281251f21d6444112b03a8ee263c15c339c011bdbd85e9ac685e3c552ecb\" pid:3076 exit_status:1 exited_at:{seconds:1742865547 nanos:620934328}" Mar 25 01:19:07.623873 containerd[1952]: time="2025-03-25T01:19:07.623049448Z" level=info msg="received exit event container_id:\"cd46281251f21d6444112b03a8ee263c15c339c011bdbd85e9ac685e3c552ecb\" id:\"cd46281251f21d6444112b03a8ee263c15c339c011bdbd85e9ac685e3c552ecb\" pid:3076 exit_status:1 exited_at:{seconds:1742865547 nanos:620934328}" Mar 25 01:19:07.662789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd46281251f21d6444112b03a8ee263c15c339c011bdbd85e9ac685e3c552ecb-rootfs.mount: Deactivated successfully. Mar 25 01:19:08.399968 kubelet[3231]: I0325 01:19:08.399824 3231 scope.go:117] "RemoveContainer" containerID="cd46281251f21d6444112b03a8ee263c15c339c011bdbd85e9ac685e3c552ecb" Mar 25 01:19:08.402804 containerd[1952]: time="2025-03-25T01:19:08.402756448Z" level=info msg="CreateContainer within sandbox \"f8056cbfc2d40a7372347a589589128ac016493fd7127c493ba124690ab8d4c3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 25 01:19:08.423242 containerd[1952]: time="2025-03-25T01:19:08.420415264Z" level=info msg="Container 4934f9b370f32fc9ccd2663db1909556e2077241fe0375677a4e3ac03725d3ea: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:19:08.438804 containerd[1952]: time="2025-03-25T01:19:08.438731092Z" level=info msg="CreateContainer within sandbox \"f8056cbfc2d40a7372347a589589128ac016493fd7127c493ba124690ab8d4c3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4934f9b370f32fc9ccd2663db1909556e2077241fe0375677a4e3ac03725d3ea\"" Mar 25 01:19:08.439918 containerd[1952]: time="2025-03-25T01:19:08.439505236Z" level=info msg="StartContainer for \"4934f9b370f32fc9ccd2663db1909556e2077241fe0375677a4e3ac03725d3ea\"" Mar 25 01:19:08.441712 containerd[1952]: time="2025-03-25T01:19:08.441645736Z" level=info msg="connecting to shim 4934f9b370f32fc9ccd2663db1909556e2077241fe0375677a4e3ac03725d3ea" address="unix:///run/containerd/s/df68cdaa707a7bdf8b24b71d23bf13c1b336ff04d73c5f5644c82e04b20ba740" protocol=ttrpc version=3 Mar 25 01:19:08.477520 systemd[1]: Started cri-containerd-4934f9b370f32fc9ccd2663db1909556e2077241fe0375677a4e3ac03725d3ea.scope - libcontainer container 4934f9b370f32fc9ccd2663db1909556e2077241fe0375677a4e3ac03725d3ea. 
Mar 25 01:19:08.559251 containerd[1952]: time="2025-03-25T01:19:08.559040896Z" level=info msg="StartContainer for \"4934f9b370f32fc9ccd2663db1909556e2077241fe0375677a4e3ac03725d3ea\" returns successfully" Mar 25 01:19:15.536348 kubelet[3231]: E0325 01:19:15.536121 3231 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-121?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"